Jan 21 13:08:58 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 13:08:58 crc restorecon[4754]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 13:08:58 crc restorecon[4754]: 
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:08:58 crc 
restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:08:58 crc restorecon[4754]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 13:08:59 crc kubenswrapper[4959]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 13:08:59 crc kubenswrapper[4959]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 13:08:59 crc kubenswrapper[4959]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 13:08:59 crc kubenswrapper[4959]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
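[Editor's note] The long run of restorecon records above all carry the same message: the file's existing SELinux context was left in place. restorecon prints "not reset as customized by admin" when it meets a customizable type while running without -F; on the targeted policy, container_file_t is typically listed in /etc/selinux/targeted/contexts/customizable_types, so the kubelet pod directories keep their per-pod MCS category pairs (e.g. s0:c7,c13). A minimal sketch for summarizing these records from a saved journal dump follows; the grouping by pod UID and reading from stdin are illustrative assumptions, not something the log itself prescribes:

```python
import re
import sys
from collections import Counter

# Matches the restorecon records in this journal dump, e.g.:
#   ... restorecon[4754]: /var/lib/kubelet/... not reset as customized
#   by admin to system_u:object_r:container_file_t:s0:c7,c13
RECORD = re.compile(
    r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin to "
    r"(?P<context>\S+)"
)
POD_UID = re.compile(r"/pods/(?P<uid>[^/]+)/")

def summarize(lines):
    """Count skipped paths per (pod UID, SELinux context) pair."""
    counts = Counter()
    for line in lines:
        m = RECORD.search(line)
        if not m:
            continue
        uid_match = POD_UID.search(m.group("path"))
        uid = uid_match.group("uid") if uid_match else "<non-pod path>"
        counts[(uid, m.group("context"))] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical usage: python summarize_restorecon.py < journal.txt
    for (uid, ctx), n in summarize(sys.stdin).most_common():
        print(f"{n:6d}  {uid}  {ctx}")
```

Grouping by pod UID makes the pattern visible at a glance: each pod's files share one MCS pair, which is exactly what the relabel pass is preserving.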
Jan 21 13:08:59 crc kubenswrapper[4959]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 13:08:59 crc kubenswrapper[4959]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.146358 4959 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149452 4959 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149476 4959 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149482 4959 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149489 4959 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149501 4959 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149506 4959 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149511 4959 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149516 4959 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149520 4959 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149524 4959 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149528 4959 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149532 4959 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149536 4959 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149541 4959 feature_gate.go:330] unrecognized feature gate: Example Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149546 4959 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149551 4959 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149555 4959 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149559 4959 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149564 4959 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149569 4959 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149573 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 13:08:59 crc 
kubenswrapper[4959]: W0121 13:08:59.149577 4959 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149583 4959 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149587 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149592 4959 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149596 4959 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149600 4959 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149605 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149611 4959 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149617 4959 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149622 4959 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149627 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149633 4959 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149640 4959 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149647 4959 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149651 4959 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149657 4959 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149662 4959 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149667 4959 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149671 4959 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149676 4959 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149680 4959 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149686 4959 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
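Editor's note: the long run of "unrecognized feature gate" warnings above is the kubelet being handed OpenShift's cluster-level feature-gate list (GatewayAPI, AdminNetworkPolicy, PinnedImages, and so on); names absent from the gate table this binary was compiled with are skipped with a W-level warning rather than failing startup. A minimal sketch of that lenient behavior, illustrative only and not the actual component-base/featuregate code:

```go
// Minimal sketch of lenient feature-gate handling: unknown gates are
// warned about and skipped instead of aborting startup.
// Illustrative only -- not the actual component-base/featuregate code.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Gates this binary was compiled with (tiny assumed subset).
var known = map[string]bool{
	"CloudDualStackNodeIPs":                  true,
	"DisableKubeletCloudCredentialProviders": true,
	"KMSv1":                                  true,
}

// set parses a "Name=bool,Name=bool" spec, warning on unknown names.
func set(spec string, enabled map[string]bool) error {
	for _, kv := range strings.Split(spec, ",") {
		name, val, ok := strings.Cut(kv, "=")
		if !ok {
			return fmt.Errorf("missing bool value for %s", kv)
		}
		b, err := strconv.ParseBool(val)
		if err != nil {
			return fmt.Errorf("invalid value of %s=%s: %v", name, val, err)
		}
		if !known[name] {
			fmt.Printf("W] unrecognized feature gate: %s\n", name)
			continue // skipped, not fatal
		}
		enabled[name] = b
	}
	return nil
}

func main() {
	enabled := map[string]bool{}
	_ = set("GatewayAPI=true,KMSv1=true", enabled)
	fmt.Println(enabled) // map[KMSv1:true]
}
```

The warning run continues below before the flag dump's values take over.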
Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149690 4959 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149695 4959 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149699 4959 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149703 4959 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149707 4959 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149710 4959 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149714 4959 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149717 4959 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149720 4959 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149724 4959 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149727 4959 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149731 4959 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149734 4959 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149738 4959 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149741 4959 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149746 4959 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149749 4959 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149752 4959 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149756 4959 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149759 4959 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149763 4959 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149766 4959 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149769 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149773 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149776 4959 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149780 
4959 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149783 4959 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.149786 4959 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149870 4959 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149880 4959 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149888 4959 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149894 4959 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149900 4959 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149905 4959 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149911 4959 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149918 4959 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149922 4959 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149927 4959 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149933 4959 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149938 4959 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149944 4959 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149949 4959 flags.go:64] FLAG: --cgroup-root="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149954 4959 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149959 4959 flags.go:64] FLAG: --client-ca-file="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149964 4959 flags.go:64] FLAG: --cloud-config="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149969 4959 flags.go:64] FLAG: --cloud-provider="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149973 4959 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149981 4959 flags.go:64] FLAG: --cluster-domain="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149986 4959 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149991 4959 flags.go:64] FLAG: --config-dir="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.149996 4959 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150002 4959 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150010 4959 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150016 4959 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150021 4959 
flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150026 4959 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150031 4959 flags.go:64] FLAG: --contention-profiling="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150035 4959 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150039 4959 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150043 4959 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150047 4959 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150054 4959 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150058 4959 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150062 4959 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150066 4959 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150070 4959 flags.go:64] FLAG: --enable-server="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150075 4959 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150081 4959 flags.go:64] FLAG: --event-burst="100" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150085 4959 flags.go:64] FLAG: --event-qps="50" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150106 4959 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150111 4959 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150115 4959 flags.go:64] FLAG: --eviction-hard="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150121 4959 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150125 4959 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150129 4959 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150133 4959 flags.go:64] FLAG: --eviction-soft="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150138 4959 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150142 4959 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150146 4959 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150152 4959 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150159 4959 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150163 4959 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150168 4959 flags.go:64] FLAG: --feature-gates="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150174 4959 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150179 4959 flags.go:64] 
FLAG: --global-housekeeping-interval="1m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150183 4959 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150188 4959 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150193 4959 flags.go:64] FLAG: --healthz-port="10248" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150197 4959 flags.go:64] FLAG: --help="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150201 4959 flags.go:64] FLAG: --hostname-override="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150205 4959 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150209 4959 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150213 4959 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150217 4959 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150221 4959 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150225 4959 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150229 4959 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150233 4959 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150238 4959 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150242 4959 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150246 4959 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150250 4959 flags.go:64] FLAG: --kube-reserved="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150254 4959 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150258 4959 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150262 4959 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150267 4959 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150271 4959 flags.go:64] FLAG: --lock-file="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150275 4959 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150279 4959 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150284 4959 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150291 4959 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150296 4959 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150300 4959 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150304 4959 flags.go:64] FLAG: --logging-format="text" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150308 4959 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150313 4959 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150316 4959 flags.go:64] FLAG: --manifest-url="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150320 4959 flags.go:64] FLAG: --manifest-url-header="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150327 4959 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150331 4959 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150337 4959 flags.go:64] FLAG: --max-pods="110" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150341 4959 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150345 4959 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150349 4959 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150353 4959 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150357 4959 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150360 4959 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150365 4959 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150378 4959 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150382 4959 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150387 4959 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150436 4959 flags.go:64] FLAG: --pod-cidr="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150460 4959 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150469 4959 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150474 4959 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150478 4959 flags.go:64] FLAG: --pods-per-core="0" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150483 4959 flags.go:64] FLAG: --port="10250" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150487 4959 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150491 4959 flags.go:64] FLAG: --provider-id="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150495 4959 flags.go:64] FLAG: --qos-reserved="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150499 4959 flags.go:64] FLAG: --read-only-port="10255" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150504 4959 flags.go:64] FLAG: --register-node="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150508 4959 flags.go:64] FLAG: --register-schedulable="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150512 4959 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150527 4959 flags.go:64] FLAG: --registry-burst="10" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150531 4959 flags.go:64] FLAG: --registry-qps="5" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150535 4959 flags.go:64] FLAG: --reserved-cpus="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150539 4959 flags.go:64] FLAG: --reserved-memory="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150545 4959 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150550 4959 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150554 4959 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150558 4959 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150562 4959 flags.go:64] FLAG: --runonce="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150566 4959 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150570 4959 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150574 4959 flags.go:64] FLAG: --seccomp-default="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150578 4959 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150582 4959 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150587 4959 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150591 4959 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150595 4959 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150599 4959 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150603 4959 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150607 4959 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150611 4959 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150616 4959 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150620 4959 flags.go:64] FLAG: --system-cgroups="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150624 4959 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150630 4959 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150634 4959 flags.go:64] FLAG: --tls-cert-file="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150638 4959 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150643 4959 flags.go:64] FLAG: --tls-min-version="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150647 4959 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150651 4959 flags.go:64] FLAG: 
--topology-manager-policy="none" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150655 4959 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150659 4959 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150664 4959 flags.go:64] FLAG: --v="2" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150671 4959 flags.go:64] FLAG: --version="false" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150678 4959 flags.go:64] FLAG: --vmodule="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150683 4959 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.150687 4959 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150788 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150792 4959 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150796 4959 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150800 4959 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150804 4959 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150808 4959 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150811 4959 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150815 4959 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150819 4959 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150823 4959 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150826 4959 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150830 4959 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150833 4959 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150839 4959 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150842 4959 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150846 4959 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150850 4959 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150853 4959 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150857 4959 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150860 4959 
feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150864 4959 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150868 4959 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150872 4959 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150877 4959 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150881 4959 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150885 4959 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150888 4959 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150893 4959 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150896 4959 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150900 4959 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150903 4959 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150907 4959 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150912 4959 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150917 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150920 4959 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150924 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150927 4959 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150931 4959 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150934 4959 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150938 4959 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150941 4959 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150945 4959 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150950 4959 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
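Editor's note: the block of "flags.go:64] FLAG: --name=value" lines above is the kubelet echoing every parsed command-line flag with its effective value at startup, defaults included, which is why deprecated flags such as --system-reserved still show their configured values. A rough sketch of that kind of dump with the standard flag package (the real kubelet goes through pflag/component-base, so this is only the shape of it):

```go
// Rough sketch of a startup flag dump like the "FLAG: --name=value"
// lines above, using stdlib flag instead of pflag/component-base.
package main

import (
	"flag"
	"fmt"
)

func main() {
	fs := flag.NewFlagSet("kubelet-sketch", flag.ContinueOnError)
	fs.String("node-ip", "", "IP address of the node")
	fs.Int("max-pods", 110, "maximum number of pods")
	_ = fs.Parse([]string{"--node-ip=192.168.126.11"})

	// Visit every flag, set or not, and log its effective value.
	fs.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}
```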
Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150955 4959 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150960 4959 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150965 4959 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150971 4959 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150975 4959 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150980 4959 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150984 4959 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150988 4959 feature_gate.go:330] unrecognized feature gate: Example Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150992 4959 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.150996 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151001 4959 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151007 4959 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151013 4959 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151016 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151020 4959 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151024 4959 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151028 4959 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151032 4959 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151035 4959 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151039 4959 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151042 4959 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151046 4959 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151049 4959 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151052 4959 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151056 4959 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151060 4959 
feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151064 4959 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.151067 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.151080 4959 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.161593 4959 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.161655 4959 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161771 4959 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161786 4959 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161790 4959 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161795 4959 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161800 4959 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161805 4959 feature_gate.go:330] unrecognized feature gate: Example Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161810 4959 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161814 4959 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161818 4959 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161822 4959 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161827 4959 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161831 4959 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161835 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161840 4959 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161845 4959 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161850 4959 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161857 4959 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
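Editor's note: the "Golang settings" line above reports the raw GOGC/GOMAXPROCS/GOTRACEBACK environment variables, so the empty strings mean the compiled-in runtime defaults are in effect, not that the values are literally empty. A small stdlib illustration of raw-versus-effective (a hypothetical program, not kubelet code):

```go
// Sketch: raw GOGC/GOMAXPROCS/GOTRACEBACK env vars (what the kubelet
// logs) versus an effective runtime value. Empty env var == default.
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	fmt.Printf("Golang settings GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
		os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))
	// GOMAXPROCS(0) only queries; it reports NumCPU unless overridden.
	fmt.Println("effective GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```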
Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161864 4959 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161869 4959 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161874 4959 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161879 4959 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161883 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161912 4959 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161918 4959 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161923 4959 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161928 4959 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161934 4959 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161938 4959 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161943 4959 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161947 4959 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161952 4959 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161958 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161964 4959 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161972 4959 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161977 4959 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161982 4959 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161988 4959 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.161996 4959 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162001 4959 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162006 4959 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162010 4959 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162015 4959 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162020 4959 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162024 4959 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162029 4959 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162033 4959 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162037 4959 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162042 4959 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162046 4959 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162050 4959 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162055 4959 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162059 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162065 4959 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
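Editor's note: each of these warning runs ends in a resolved "feature_gate.go:386] feature gates: {map[...]}" summary (one appears above, the next follows below); only the gates this kubelet actually knows survive, printed with explicit booleans in alphabetical order. A toy reproduction of that summary line, relying on fmt's sorted map printing:

```go
// Toy reproduction of the resolved "feature gates: {map[...]}" summary:
// fmt prints map keys in sorted order, matching the log lines above.
package main

import "fmt"

func main() {
	enabled := map[string]bool{
		"ValidatingAdmissionPolicy": true,
		"KMSv1":                     true,
		"NodeSwap":                  false,
	}
	fmt.Printf("feature gates: {%v}\n", enabled)
	// feature gates: {map[KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true]}
}
```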
Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162071 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162075 4959 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162080 4959 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162085 4959 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162117 4959 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162122 4959 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162126 4959 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162131 4959 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162136 4959 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162142 4959 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162148 4959 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162152 4959 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162160 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162165 4959 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162171 4959 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162176 4959 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162181 4959 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162186 4959 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.162195 4959 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162356 4959 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162368 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162374 4959 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162379 4959 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162384 4959 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162389 4959 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162393 4959 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162398 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162403 4959 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162408 4959 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162412 4959 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162417 4959 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162422 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162427 4959 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162463 4959 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162469 4959 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162474 4959 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162479 4959 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162485 4959 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162489 4959 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162494 4959 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162499 4959 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162503 4959 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162511 4959 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162516 4959 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162521 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162526 4959 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162531 4959 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162535 4959 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162540 4959 feature_gate.go:330] unrecognized feature 
gate: NodeDisruptionPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162544 4959 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162549 4959 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162553 4959 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162558 4959 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162562 4959 feature_gate.go:330] unrecognized feature gate: Example Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162567 4959 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162572 4959 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162578 4959 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162583 4959 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162588 4959 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162592 4959 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162597 4959 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162784 4959 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162790 4959 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162798 4959 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162803 4959 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162810 4959 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162815 4959 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162819 4959 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162825 4959 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162831 4959 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162836 4959 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162841 4959 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162846 4959 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
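Editor's note: this is the fourth pass over the same gate spec in this section (the runs stamped 13:08:59.1494xx, .1507xx, .1617xx, and .1623xx are identical apart from ordering and timestamps). The repetition is consistent with the spec being re-applied to a fresh gate table at several startup stages, each pass re-emitting the same warnings; a sketch of that pattern, with stage names assumed for illustration:

```go
// Sketch of why the identical warning run repeats: the same gate spec is
// re-applied at several startup stages (stage names assumed), and each
// pass re-emits the warnings for names the binary does not know.
package main

import "fmt"

var known = map[string]bool{"KMSv1": true}

func apply(stage string, spec []string) {
	for _, name := range spec {
		if !known[name] {
			fmt.Printf("[%s] unrecognized feature gate: %s\n", stage, name)
		}
	}
}

func main() {
	spec := []string{"GatewayAPI", "KMSv1", "PinnedImages"}
	for _, stage := range []string{"flag-parse", "config-load", "server-init", "run"} {
		apply(stage, spec) // same spec => same warnings, four times
	}
}
```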
Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162851 4959 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162857 4959 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162862 4959 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162866 4959 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162871 4959 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162875 4959 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162880 4959 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162884 4959 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162889 4959 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162892 4959 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162896 4959 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162900 4959 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162904 4959 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162908 4959 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162911 4959 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162915 4959 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.162919 4959 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.162927 4959 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.163436 4959 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.166537 4959 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.166638 4959 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
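Editor's note: the records above show the bootstrap kubeconfig is still valid and the client certificate is loaded from /var/lib/kubelet/pki/kubelet-client-current.pem; in the rotation records that follow, the rotation deadline (2026-01-02) lands well before the expiry (2026-02-24) because the certificate manager rotates at a jittered fraction of the certificate's lifetime rather than at expiry. An operator-style sketch for inspecting that file with only the standard library (path taken from the log):

```go
// Operator-style sketch: read the cert the log mentions and print its
// validity window using only the standard library.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The file holds both the key and the cert; walk all PEM blocks.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```

The E-level "connection refused" on the CertificateSigningRequest POST just below is the rotation attempt racing an API server that is not up yet; the kubelet retries, so it is noisy but not fatal at this point in startup.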
Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.167280 4959 server.go:997] "Starting client certificate rotation" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.167309 4959 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.167539 4959 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-02 00:01:28.548639031 +0000 UTC Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.167708 4959 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.173375 4959 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.175421 4959 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.175552 4959 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.184356 4959 log.go:25] "Validated CRI v1 runtime API" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.196429 4959 log.go:25] "Validated CRI v1 image API" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.198648 4959 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.200964 4959 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-13-04-27-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.201018 4959 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.220259 4959 manager.go:217] Machine: {Timestamp:2026-01-21 13:08:59.219067972 +0000 UTC m=+0.182098535 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:eb8e8451-d560-452c-bda4-2002f2e3fe0b BootID:643a7796-2a45-42fa-a4a4-6600967da7c3 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:8b:34 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:20:8b:34 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:48:e8:cf Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ac:e4:4a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7f:22:8c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:38:d4:2d Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:7f:38:58 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:fe:c1:b3:ec:0e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1a:52:92:13:86:e8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.220497 4959 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.220734 4959 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.221601 4959 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.221830 4959 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.221879 4959 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.222136 4959 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 
13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.222149 4959 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.222338 4959 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.222408 4959 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.222729 4959 state_mem.go:36] "Initialized new in-memory state store" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.222822 4959 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.223528 4959 kubelet.go:418] "Attempting to sync node with API server" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.223549 4959 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.223574 4959 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.223588 4959 kubelet.go:324] "Adding apiserver pod source" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.223599 4959 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.225404 4959 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.225583 4959 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.225679 4959 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.225734 4959 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.225841 4959 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.225882 4959 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
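The container_manager_linux.go:272 Node Config dump a few entries back carries the kubelet's hard eviction thresholds, mixing absolute quantities with fractional values. A small sketch that re-expresses just those thresholds so the units read clearly; the values are copied from the dump above, with the layout trimmed to the fields used here:

    import json

    # Values copied from the HardEvictionThresholds in the dump above.
    node_config = json.loads("""
    {"HardEvictionThresholds": [
      {"Signal": "memory.available",  "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
      {"Signal": "nodefs.available",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}},
      {"Signal": "nodefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "imagefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}},
      {"Signal": "imagefs.inodesFree","Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}}
    ]}
    """)

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] is not None else f"{v['Percentage']:.0%}"
        print(f"{t['Signal']:<20} {t['Operator']} {limit}")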
Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.226852 4959 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227481 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227509 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227516 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227524 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227538 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227548 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227556 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227568 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227577 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227585 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227598 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227606 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.227766 4959 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.228350 4959 server.go:1280] "Started kubelet" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.228538 4959 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.228764 4959 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.229016 4959 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.229627 4959 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 13:08:59 crc systemd[1]: Started Kubernetes Kubelet. 
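From the first certificate-signing request onward, the E- and W-level failures in this stretch all share one shape: dial tcp 38.102.83.220:6443: connect: connection refused against https://api-int.crc.testing:6443, consistent with the kubelet starting before the API server is serving. A minimal sketch, assuming the quoting styles seen above (plain quotes in W lines, backslash-escaped quotes in E lines), that confirms every refused dial targets the same host and port:

    import re
    from collections import Counter

    # Matches both the plain and the backslash-escaped quoting seen above.
    REFUSED = re.compile(r'Get \\?"(https://[^"\\]+)\\?".*?connection refused')

    def refused_endpoints(journal_text: str) -> Counter:
        hits = Counter()
        for url in REFUSED.findall(journal_text):
            host_port = url.split("/", 3)[2]  # scheme://host:port/path
            hits[host_port] += 1
        return hits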
Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.230785 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.230822 4959 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.230841 4959 server.go:460] "Adding debug handlers to kubelet server" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.230858 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:42:34.716804836 +0000 UTC Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.231192 4959 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.231285 4959 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.231293 4959 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.231563 4959 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.234202 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="200ms" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.233154 4959 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc0fb4e094bcc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 13:08:59.228318668 +0000 UTC m=+0.191349211,LastTimestamp:2026-01-21 13:08:59.228318668 +0000 UTC m=+0.191349211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.235084 4959 factory.go:55] Registering systemd factory Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.235128 4959 factory.go:221] Registration of the systemd container factory successfully Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.235699 4959 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.235807 4959 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.239227 4959 factory.go:153] Registering CRI-O factory Jan 21 13:08:59 crc kubenswrapper[4959]: 
I0121 13:08:59.239294 4959 factory.go:221] Registration of the crio container factory successfully Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.239436 4959 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.239483 4959 factory.go:103] Registering Raw factory Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.239503 4959 manager.go:1196] Started watching for new ooms in manager Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.240411 4959 manager.go:319] Starting recovery of all containers Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247253 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247345 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247358 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247369 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247379 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247394 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247403 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247412 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247424 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247433 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247443 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247453 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247462 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247474 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247482 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247494 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247505 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247514 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247547 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247556 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247589 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247598 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247609 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247620 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247636 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247647 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247661 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247673 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247683 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247699 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247709 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247721 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247731 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247740 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247750 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247761 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247772 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247782 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247809 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247820 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247829 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247840 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247850 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247863 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247873 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247882 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247892 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247903 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247913 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247923 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247936 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247946 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247976 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.247992 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248003 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248019 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248032 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248043 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248055 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248066 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248076 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248085 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248118 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248131 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248143 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248157 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248168 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248179 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248191 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248204 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248215 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248882 4959 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248903 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248914 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248930 4959 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248940 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248949 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248960 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248969 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248980 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.248990 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249001 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249028 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249055 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249070 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249081 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249110 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249120 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249131 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249142 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249156 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249165 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249174 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249184 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249195 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249207 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249220 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249230 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249241 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249252 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249262 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249272 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249283 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249294 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249304 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249330 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249342 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249354 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249369 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249380 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249390 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249405 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249416 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249426 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249437 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249446 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249457 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249466 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249483 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249492 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249503 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249513 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249522 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249533 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249550 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249562 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249572 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249583 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249595 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249606 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249615 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249625 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249634 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249642 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249651 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249661 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249670 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249679 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249689 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249698 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249708 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249717 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249727 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249737 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249746 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249755 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249765 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249775 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249785 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249794 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249803 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249812 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249863 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249881 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249895 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249907 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249919 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249933 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249945 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249956 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249967 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249978 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.249995 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250006 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250019 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250033 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250046 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250058 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250071 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250085 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250110 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250120 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250137 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250148 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250159 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250171 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250180 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250189 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250204 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250220 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250234 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250245 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250260 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250274 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250286 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250303 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250316 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250332 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250344 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250393 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250406 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250419 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250429 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250441 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250453 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250465 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250475 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250488 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250512 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250534 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250548 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250562 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250571 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250582 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250591 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250601 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250610 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" 
seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250622 4959 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250630 4959 reconstruct.go:97] "Volume reconstruction finished" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.250638 4959 reconciler.go:26] "Reconciler: start to sync state" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.268841 4959 manager.go:324] Recovery completed Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.281074 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.282388 4959 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.283415 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.283473 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.283487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.284779 4959 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.284824 4959 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.284840 4959 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.284860 4959 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.284863 4959 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.284888 4959 state_mem.go:36] "Initialized new in-memory state store" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.284926 4959 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.285956 4959 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.286027 4959 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.293832 4959 policy_none.go:49] "None policy: Start" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.295381 4959 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.295439 4959 
state_mem.go:35] "Initializing new in-memory state store" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.332137 4959 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.361309 4959 manager.go:334] "Starting Device Plugin manager" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.361408 4959 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.361423 4959 server.go:79] "Starting device plugin registration server" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.361877 4959 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.361897 4959 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.362166 4959 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.362292 4959 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.362306 4959 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.369329 4959 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.385588 4959 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.385835 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.387109 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.387157 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.387170 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.387373 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.387806 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.387935 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.388408 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.388487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.388504 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.388766 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.388919 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.388968 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.389325 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.389366 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.389379 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.389812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.389855 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.389868 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.390130 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.390176 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.390196 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.390557 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.390723 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.390791 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.391941 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.391964 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.391976 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392061 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392090 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392120 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392205 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392242 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392120 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392808 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392834 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392846 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392872 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392890 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.392898 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.393007 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.393026 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.393842 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.393863 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.393871 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.435437 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="400ms" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454356 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454458 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454503 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454526 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454553 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454582 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454623 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454685 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454716 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454745 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454840 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454905 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454935 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454961 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.454987 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.462983 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.464436 4959 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.464475 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.464487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.464522 4959 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.465120 4959 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.556801 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.556875 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.556909 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.556932 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.556954 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.556977 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.556993 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557012 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557036 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557054 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557040 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557129 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557072 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557244 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557248 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557256 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557279 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557273 4959 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557298 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557310 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557189 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557345 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557390 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557452 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557494 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557538 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557557 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557591 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557635 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.557733 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.666153 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.668177 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.668245 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.668258 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.668288 4959 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.668686 4959 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.708419 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.716340 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.730958 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e4870510bc0247989422ca9071ccaf6f1f19d8599e74e212a82a70232acabb89 WatchSource:0}: Error finding container e4870510bc0247989422ca9071ccaf6f1f19d8599e74e212a82a70232acabb89: Status 404 returned error can't find the container with id e4870510bc0247989422ca9071ccaf6f1f19d8599e74e212a82a70232acabb89 Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.733912 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.743854 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4bd66cf98ed0e20876ebc7ee086796f5eff6c0e0a32b77ac088917b2b9272d88 WatchSource:0}: Error finding container 4bd66cf98ed0e20876ebc7ee086796f5eff6c0e0a32b77ac088917b2b9272d88: Status 404 returned error can't find the container with id 4bd66cf98ed0e20876ebc7ee086796f5eff6c0e0a32b77ac088917b2b9272d88 Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.751106 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.754487 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-433e2331ee02740b37dcd294281bdaabb8f4ddfb04c81f8c73a1fc51da8fd419 WatchSource:0}: Error finding container 433e2331ee02740b37dcd294281bdaabb8f4ddfb04c81f8c73a1fc51da8fd419: Status 404 returned error can't find the container with id 433e2331ee02740b37dcd294281bdaabb8f4ddfb04c81f8c73a1fc51da8fd419 Jan 21 13:08:59 crc kubenswrapper[4959]: I0121 13:08:59.758570 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.767079 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-30731ff90e017bd905598201029a17c73dcfd42c9f85eed64ed370443f117e38 WatchSource:0}: Error finding container 30731ff90e017bd905598201029a17c73dcfd42c9f85eed64ed370443f117e38: Status 404 returned error can't find the container with id 30731ff90e017bd905598201029a17c73dcfd42c9f85eed64ed370443f117e38 Jan 21 13:08:59 crc kubenswrapper[4959]: W0121 13:08:59.782634 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2c0e6256afbd17785018b526e5a76633d8c3c5a76e479d0a8eb7bab4750e956c WatchSource:0}: Error finding container 2c0e6256afbd17785018b526e5a76633d8c3c5a76e479d0a8eb7bab4750e956c: Status 404 returned error can't find the container with id 2c0e6256afbd17785018b526e5a76633d8c3c5a76e479d0a8eb7bab4750e956c Jan 21 13:08:59 crc kubenswrapper[4959]: E0121 13:08:59.836545 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="800ms" Jan 21 13:09:00 crc kubenswrapper[4959]: W0121 13:09:00.068507 4959 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:09:00 crc kubenswrapper[4959]: E0121 13:09:00.068596 4959 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.069531 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.071577 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.071610 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.071624 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.071652 4959 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:09:00 crc kubenswrapper[4959]: E0121 13:09:00.072234 4959 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.220:6443: connect: connection refused" node="crc" Jan 21 13:09:00 crc kubenswrapper[4959]: W0121 13:09:00.127655 4959 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:09:00 crc kubenswrapper[4959]: E0121 13:09:00.128295 4959 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.230305 4959 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.231169 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:53:07.434605128 +0000 UTC Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.291891 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.292047 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"433e2331ee02740b37dcd294281bdaabb8f4ddfb04c81f8c73a1fc51da8fd419"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.294980 4959 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8" exitCode=0 Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.295058 4959 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.295079 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4bd66cf98ed0e20876ebc7ee086796f5eff6c0e0a32b77ac088917b2b9272d88"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.295236 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.297833 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.297974 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.297996 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.298457 4959 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f" exitCode=0 Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.298549 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.298588 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e4870510bc0247989422ca9071ccaf6f1f19d8599e74e212a82a70232acabb89"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.298882 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.300464 4959 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b" exitCode=0 Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.300539 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.300594 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2c0e6256afbd17785018b526e5a76633d8c3c5a76e479d0a8eb7bab4750e956c"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.300690 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.300723 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 
13:09:00.300691 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.300735 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.301004 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.301779 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.301812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.301825 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.303204 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.303265 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.303293 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.304702 4959 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d" exitCode=0 Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.304761 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.304800 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"30731ff90e017bd905598201029a17c73dcfd42c9f85eed64ed370443f117e38"} Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.304899 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.306573 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.306617 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.306634 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:00 crc kubenswrapper[4959]: W0121 13:09:00.484310 4959 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:09:00 crc kubenswrapper[4959]: E0121 13:09:00.484454 4959 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:09:00 crc kubenswrapper[4959]: E0121 13:09:00.637757 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="1.6s" Jan 21 13:09:00 crc kubenswrapper[4959]: W0121 13:09:00.760514 4959 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.220:6443: connect: connection refused Jan 21 13:09:00 crc kubenswrapper[4959]: E0121 13:09:00.760734 4959 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.220:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.873777 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.878138 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.878184 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.878197 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:00 crc kubenswrapper[4959]: I0121 13:09:00.878230 4959 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.184336 4959 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.231789 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:58:47.863811231 +0000 UTC Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.310442 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.310498 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.310514 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.310526 4959 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.313183 4959 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4" exitCode=0 Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.313245 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.313437 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.314428 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.314472 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.314487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.315472 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9f5d1bfcfe736816e9159afa54c9c980d283c7103298d6950005c9e50e8840f3"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.315557 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.316359 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.316380 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.316389 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.318698 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.318735 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.318747 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.318808 4959 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.326657 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.326728 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.326748 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.330924 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.330989 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.331003 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6"} Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.331022 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.332266 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.332302 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:01 crc kubenswrapper[4959]: I0121 13:09:01.332314 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.232011 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:41:10.459883002 +0000 UTC Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.336853 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f"} Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.336915 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.338083 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.338305 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.338414 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:02 
crc kubenswrapper[4959]: I0121 13:09:02.339407 4959 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf" exitCode=0 Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.339475 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.339528 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf"} Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.339602 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.339687 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.339799 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340211 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340229 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340237 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340746 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340763 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340771 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340824 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340857 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:02 crc kubenswrapper[4959]: I0121 13:09:02.340868 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.232462 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:14:28.844585684 +0000 UTC Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.346770 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47"} Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.346878 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b"} Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.346900 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2"} Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.346810 4959 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.346981 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.346988 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.348369 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.348404 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.348453 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.348466 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.348418 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:03 crc kubenswrapper[4959]: I0121 13:09:03.348522 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.002918 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.003192 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.004516 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.004553 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.004593 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.168434 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.232611 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:41:15.473186172 +0000 UTC Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.353142 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc"} 
Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.353170 4959 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.353186 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5"} Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.353200 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.353212 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.354321 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.354392 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.354400 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.354432 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.354412 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.354444 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.448191 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:09:04 crc kubenswrapper[4959]: I0121 13:09:04.868973 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.233569 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:46:54.711510714 +0000 UTC Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.356320 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.356322 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.357648 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.357677 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.357687 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.358152 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.358241 4959 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.358298 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.679840 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.680069 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.681683 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.681768 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.681787 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:05 crc kubenswrapper[4959]: I0121 13:09:05.688257 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.234183 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:24:41.406785906 +0000 UTC Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.359139 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.359139 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.360143 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.360272 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.360303 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.360345 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.360361 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.360370 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:06 crc kubenswrapper[4959]: I0121 13:09:06.978656 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.234660 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:34:27.680511128 +0000 UTC Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.361958 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 
13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.363419 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.363488 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.363512 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.468710 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.468928 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.470292 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.470330 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:07 crc kubenswrapper[4959]: I0121 13:09:07.470346 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:08 crc kubenswrapper[4959]: I0121 13:09:08.235634 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:57:56.310452721 +0000 UTC Jan 21 13:09:08 crc kubenswrapper[4959]: I0121 13:09:08.858641 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 13:09:08 crc kubenswrapper[4959]: I0121 13:09:08.858899 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:08 crc kubenswrapper[4959]: I0121 13:09:08.860710 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:08 crc kubenswrapper[4959]: I0121 13:09:08.860756 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:08 crc kubenswrapper[4959]: I0121 13:09:08.860769 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:09 crc kubenswrapper[4959]: I0121 13:09:09.227479 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:09:09 crc kubenswrapper[4959]: I0121 13:09:09.227814 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:09 crc kubenswrapper[4959]: I0121 13:09:09.229302 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:09 crc kubenswrapper[4959]: I0121 13:09:09.229352 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:09 crc kubenswrapper[4959]: I0121 13:09:09.229365 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:09 crc kubenswrapper[4959]: I0121 13:09:09.236122 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 
22:46:52.045362683 +0000 UTC Jan 21 13:09:09 crc kubenswrapper[4959]: E0121 13:09:09.369843 4959 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 13:09:09 crc kubenswrapper[4959]: I0121 13:09:09.979036 4959 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 13:09:09 crc kubenswrapper[4959]: I0121 13:09:09.979548 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 13:09:10 crc kubenswrapper[4959]: I0121 13:09:10.236467 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:29:19.495035088 +0000 UTC Jan 21 13:09:10 crc kubenswrapper[4959]: E0121 13:09:10.879772 4959 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 13:09:11 crc kubenswrapper[4959]: E0121 13:09:11.186331 4959 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 13:09:11 crc kubenswrapper[4959]: I0121 13:09:11.230306 4959 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 13:09:11 crc kubenswrapper[4959]: I0121 13:09:11.236735 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:01:35.088251765 +0000 UTC Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.183444 4959 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.183545 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.208540 4959 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.208641 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.237359 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:19:30.807376492 +0000 UTC Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.480156 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.482318 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.482383 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.482408 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:12 crc kubenswrapper[4959]: I0121 13:09:12.482451 4959 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:09:13 crc kubenswrapper[4959]: I0121 13:09:13.238422 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:47:41.148435237 +0000 UTC Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.006800 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.006968 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.008245 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.008280 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.008292 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.238561 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:55:45.132648412 +0000 UTC Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.454177 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.454457 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.456003 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 
13:09:14.456062 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.456082 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:14 crc kubenswrapper[4959]: I0121 13:09:14.459198 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.239407 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:40:06.07866554 +0000 UTC
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.350297 4959 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.362286 4959 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.381426 4959 csr.go:261] certificate signing request csr-grmzt is approved, waiting to be issued
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.388329 4959 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.389320 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.389361 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.389378 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:15 crc kubenswrapper[4959]: I0121 13:09:15.389646 4959 csr.go:257] certificate signing request csr-grmzt is issued
Jan 21 13:09:16 crc kubenswrapper[4959]: I0121 13:09:16.240260 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:03:22.58145831 +0000 UTC
Jan 21 13:09:16 crc kubenswrapper[4959]: I0121 13:09:16.391206 4959 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 13:04:15 +0000 UTC, rotation deadline is 2026-10-31 23:16:04.29092203 +0000 UTC
Jan 21 13:09:16 crc kubenswrapper[4959]: I0121 13:09:16.391624 4959 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6802h6m47.899304925s for next certificate rotation
Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.143267 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.146011 4959 trace.go:236] Trace[1707248702]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 13:09:02.628) (total time: 14517ms):
Jan 21 13:09:17 crc kubenswrapper[4959]: Trace[1707248702]: ---"Objects listed" error: 14516ms (13:09:17.145)
Jan 21 13:09:17 crc kubenswrapper[4959]: Trace[1707248702]: [14.517010813s] [14.517010813s] END
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.146044 4959 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.146072 4959 trace.go:236] Trace[1205352843]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 13:09:02.525) (total time: 14620ms):
Jan 21 13:09:17 crc kubenswrapper[4959]: Trace[1205352843]: ---"Objects listed" error: 14620ms (13:09:17.145)
Jan 21 13:09:17 crc kubenswrapper[4959]: Trace[1205352843]: [14.620811557s] [14.620811557s] END
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.146122 4959 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.146807 4959 trace.go:236] Trace[529279454]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 13:09:02.535) (total time: 14611ms):
Jan 21 13:09:17 crc kubenswrapper[4959]: Trace[529279454]: ---"Objects listed" error: 14611ms (13:09:17.146)
Jan 21 13:09:17 crc kubenswrapper[4959]: Trace[529279454]: [14.611611856s] [14.611611856s] END
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.146830 4959 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.147562 4959 trace.go:236] Trace[602266929]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 13:09:02.819) (total time: 14328ms):
Jan 21 13:09:17 crc kubenswrapper[4959]: Trace[602266929]: ---"Objects listed" error: 14327ms (13:09:17.147)
Jan 21 13:09:17 crc kubenswrapper[4959]: Trace[602266929]: [14.328137896s] [14.328137896s] END
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.147594 4959 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.147604 4959 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.232655 4959 apiserver.go:52] "Watching apiserver"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.236769 4959 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.237086 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.237670 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.237763 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.237815 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.237839 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.237850 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.237885 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.237962 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.238411 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.238461 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.240785 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:44:33.316564488 +0000 UTC
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.241338 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.241602 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.241779 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.241875 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.242585 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.242622 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.243134 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.243369 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.243471 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.272180 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.275067 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.288026 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.295624 4959 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37440->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.295801 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37440->192.168.126.11:17697: read: connection reset by peer"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.295959 4959 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37428->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.296032 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37428->192.168.126.11:17697: read: connection reset by peer"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.296325 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.296353 4959 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.296422 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.297193 4959 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.297283 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.303778 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.315760 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.326459 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.332042 4959 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.336117 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.348576 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.348893 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.349963 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.350267 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.350651 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.351170 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.351368 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.351495 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.351801 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.352065 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.352799 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.353130 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.353458 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.361876 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.361939 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.361971 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362000 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.354853 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362041 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362326 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362355 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362375 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362390 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362407 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362425 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362445 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362461 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362480 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362498 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362516 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362540 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362558 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.349058 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.349903 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.350221 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.350601 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.351124 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.351674 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.351700 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.351755 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.352018 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.352749 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.353067 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.353410 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362394 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362471 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362571 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362578 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362818 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362850 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362872 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362895 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362897 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362920 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362942 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362962 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.362991 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363010 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363029 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363023 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363051 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363136 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363155 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363175 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363197 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363217 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363236 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363258 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363260 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363276 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363297 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363316 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363333 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363354 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363372 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363390 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363410 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363427 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363444 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363464 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363480 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363496 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363513 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363533 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363550 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363568 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363589 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363606 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363623 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363641 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363658 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363676 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363693 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363694 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363851 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364000 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364078 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364164 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364235 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364287 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364323 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364669 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364831 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.364911 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.365070 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.365163 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.365395 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.365455 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.365774 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.366039 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.366106 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.366231 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.366411 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.366685 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.367597 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.367636 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368170 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.363697 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368317 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368344 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368373 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368396 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368421 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368449 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368476 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368500 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368526 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368552 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368584 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368610 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368630 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368651 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368671 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368692 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368709 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368729 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368748 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368784 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368802 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368819 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368835 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368850 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368865 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368893 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368909 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368931 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368947 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368965 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368980 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.368999 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369015 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369031 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369050 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369068 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369084 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369128 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369145 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369164 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369184 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369290 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369309 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369324 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369340 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369355 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369385 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369400 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369416 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369434 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369454 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369503 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369523 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369541 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369567 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369588 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369608 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369629 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 
13:09:17.369649 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369672 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369690 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369710 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369732 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369752 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369770 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369790 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369806 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369825 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369842 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369860 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369878 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369895 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369913 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369937 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369955 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369973 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.369991 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370010 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370028 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370047 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370065 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370084 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370116 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370134 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370152 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370169 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370187 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370206 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370224 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370245 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370264 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370282 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370300 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370320 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370339 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370357 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370375 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370395 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370414 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370431 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370453 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370470 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370488 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370508 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370526 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370555 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370573 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370592 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" 
(UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370610 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370627 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370646 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370664 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370685 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370703 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370721 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370738 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370754 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370772 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370790 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370809 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370827 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370882 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370909 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370929 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370952 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370973 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.370995 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.371015 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.371039 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.371062 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.371084 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.382689 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.382742 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.382767 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.382800 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383412 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383438 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383453 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383472 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383489 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383502 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383516 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383531 4959 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383545 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383558 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383571 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383586 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383599 4959 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383611 4959 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383622 4959 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383634 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383646 4959 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383658 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384535 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384570 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384583 4959 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384596 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384609 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384620 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384632 4959 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384643 4959 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384653 4959 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384665 4959 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384679 4959 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384691 4959 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384701 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384713 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384723 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384733 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384743 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384752 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384763 4959 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384773 4959 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc 
kubenswrapper[4959]: I0121 13:09:17.384782 4959 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384792 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384801 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384811 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.384821 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.401237 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.374265 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.375041 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.376014 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.380117 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.380867 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.381130 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.382113 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.382088 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383451 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.383477 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.387170 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.387455 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.387668 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:09:17.887643634 +0000 UTC m=+18.850674177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.408952 4959 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.409028 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:17.909006043 +0000 UTC m=+18.872036586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.409394 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.409703 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.409932 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.410042 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.410107 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.411266 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.411303 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.411363 4959 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.411620 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.411894 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.387894 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.388479 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.388797 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.388856 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.390360 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.393609 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.394009 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.396712 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.396984 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.397570 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.407577 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.406778 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.407432 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.408171 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.408210 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.408729 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.413245 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.413327 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.414046 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.414162 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.414212 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.414836 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.415200 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.415605 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.415648 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.415712 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.415929 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.417260 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.417721 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.417764 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.420255 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.420263 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.420268 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.420493 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.420568 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.420675 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.420868 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.421255 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.421401 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.421786 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.421979 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.422020 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.422172 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.422234 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.422424 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.422642 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.422729 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423059 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423090 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423368 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423496 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423442 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423648 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423672 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423721 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.423911 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.424005 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.424184 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.424326 4959 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.424486 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.424980 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.425985 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.426058 4959 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.426452 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.426708 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:17.926671449 +0000 UTC m=+18.889702162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.426706 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.426988 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.427182 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.427370 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.427609 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.427939 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.428076 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.428088 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.428791 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.429330 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.429386 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.429678 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.430137 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.429779 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.430820 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.432925 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.435798 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.435849 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.436251 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.436470 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.436672 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.436738 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.436895 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.437013 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.437362 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.437442 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.437527 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.437824 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.438901 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.439388 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.440029 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.440600 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.440740 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.440805 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.443552 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.448393 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.448858 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.449044 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.449650 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.450372 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.450924 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.461621 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.461679 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.462082 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.462132 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.462132 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.462145 4959 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.462288 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:17.962264536 +0000 UTC m=+18.925295250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.462461 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.463718 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.463811 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.464404 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.464442 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.464464 4959 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.464546 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:17.964515633 +0000 UTC m=+18.927546366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.466930 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.467215 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.467390 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.467790 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.468529 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.468917 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.469390 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.469518 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.469691 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.474282 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.477253 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.481278 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gx5vl"] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.481617 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wwkrl"] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.481894 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.489743 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.490125 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-w5zw9"] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.490401 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.490561 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.490658 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.490766 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.492929 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-gx5vl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.495238 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.496630 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.497246 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.497330 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.506530 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.507411 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.507617 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.507700 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.507832 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508058 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508295 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tqwdg"] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508338 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508397 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508459 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508485 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508531 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508771 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508814 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.508819 4959 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes/kubernetes.io~secret/package-server-manager-serving-cert Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508853 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508963 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508984 4959 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.508998 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509012 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509027 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509040 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509052 4959 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509065 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509080 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.509743 4959 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509783 
4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.509828 4959 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.509861 4959 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509879 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509880 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509937 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.509964 4959 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.509975 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.510089 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.510290 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511410 4959 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511453 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511468 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511481 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511495 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511509 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511523 4959 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511537 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511550 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511562 4959 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511574 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511586 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511598 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511624 4959 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511640 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511654 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511667 4959 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511680 4959 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511697 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511711 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511723 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511735 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511747 4959 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511761 4959 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511773 4959 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511785 4959 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511797 4959 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511808 4959 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511820 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511833 4959 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511847 4959 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511860 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511871 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511883 4959 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511895 4959 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511907 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511919 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511934 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511948 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511961 4959 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511973 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511985 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.511996 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512008 4959 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512020 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512032 4959 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512044 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512059 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512071 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512260 4959 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512277 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512292 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512304 4959 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512316 4959 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512328 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512357 4959 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512368 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512381 4959 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512396 4959 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512409 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512422 4959 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512434 4959 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512447 4959 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512461 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512474 4959 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512486 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512498 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512511 4959 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512522 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512533 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512545 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512556 4959 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512568 4959 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512580 4959 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512591 4959 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512603 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512616 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512633 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512646 4959 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512658 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512669 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512681 4959 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512693 4959 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512697 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.512706 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513061 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513075 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513086 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513116 4959 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513125 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513135 4959 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513144 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513152 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513161 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513169 4959 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513178 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513188 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node 
\"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513197 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513206 4959 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513216 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513226 4959 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513235 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513244 4959 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513252 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513261 4959 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513269 4959 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513278 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513286 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513295 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513306 4959 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc 
kubenswrapper[4959]: I0121 13:09:17.513315 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513323 4959 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513334 4959 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513344 4959 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513353 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513362 4959 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513374 4959 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513385 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513398 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513410 4959 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513425 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513440 4959 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513451 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc 
kubenswrapper[4959]: I0121 13:09:17.513463 4959 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513473 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513485 4959 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513495 4959 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513507 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513517 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513526 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.513918 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.514666 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.516439 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.516673 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.516941 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.517353 4959 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.517429 4959 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.518083 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.518462 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.518563 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.518200 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.518788 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.520716 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.521130 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.521180 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.521195 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.521216 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.521288 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.521233 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.521321 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.524318 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.526248 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.526603 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.527494 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.527888 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.528362 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.530027 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.532780 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.534923 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.540611 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.544606 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.544660 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.544670 4959 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.544685 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.544695 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.549700 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.550275 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.559131 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.559060 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.566669 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.566716 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.566754 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.566763 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.566785 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.566799 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.569152 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.584157 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.584221 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.587987 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-03dfdffddaee23e5fca48fa805319aae0ac04bf9b6363b6a6a1f70141b3a0487 WatchSource:0}: Error finding container 03dfdffddaee23e5fca48fa805319aae0ac04bf9b6363b6a6a1f70141b3a0487: Status 404 returned 
error can't find the container with id 03dfdffddaee23e5fca48fa805319aae0ac04bf9b6363b6a6a1f70141b3a0487 Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.591476 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.591522 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.591534 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.591556 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.591570 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.591442 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-03a908ce3ab395c66e9edd2fe3f0c4faf8e5c457d8e93c09e91b43636802eb4b WatchSource:0}: Error finding container 03a908ce3ab395c66e9edd2fe3f0c4faf8e5c457d8e93c09e91b43636802eb4b: Status 404 returned error can't find the container with id 03a908ce3ab395c66e9edd2fe3f0c4faf8e5c457d8e93c09e91b43636802eb4b Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.596744 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.606474 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.609486 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.610797 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.610859 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.610873 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.610893 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.610905 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614182 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpg6h\" (UniqueName: \"kubernetes.io/projected/867d68b2-3803-46b0-b974-62ec7ee89b49-kube-api-access-dpg6h\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614249 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-os-release\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614286 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-socket-dir-parent\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614413 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-etc-kubernetes\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614449 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/342f1ad8-984e-41bd-acca-edad9366e45d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614669 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-cni-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614746 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-cnibin\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614784 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-conf-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.614863 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstm5\" (UniqueName: \"kubernetes.io/projected/342f1ad8-984e-41bd-acca-edad9366e45d-kube-api-access-mstm5\") 
pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615018 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-hostroot\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615051 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-system-cni-dir\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615077 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-os-release\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615114 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-netns\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615144 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/342f1ad8-984e-41bd-acca-edad9366e45d-cni-binary-copy\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615211 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d99d89-7cdc-410d-b2f3-347be806f79a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615239 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f4768408-f881-4a09-9857-2e7580a4b1c2-hosts-file\") pod \"node-resolver-gx5vl\" (UID: \"f4768408-f881-4a09-9857-2e7580a4b1c2\") " pod="openshift-dns/node-resolver-gx5vl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615266 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615337 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00d99d89-7cdc-410d-b2f3-347be806f79a-proxy-tls\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615362 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/867d68b2-3803-46b0-b974-62ec7ee89b49-cni-binary-copy\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615389 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-multus-certs\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615414 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-k8s-cni-cncf-io\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615439 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-cni-bin\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615472 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-cni-multus\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615502 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00d99d89-7cdc-410d-b2f3-347be806f79a-rootfs\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615572 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmklh\" (UniqueName: \"kubernetes.io/projected/00d99d89-7cdc-410d-b2f3-347be806f79a-kube-api-access-cmklh\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615602 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99r4n\" (UniqueName: \"kubernetes.io/projected/f4768408-f881-4a09-9857-2e7580a4b1c2-kube-api-access-99r4n\") pod \"node-resolver-gx5vl\" (UID: \"f4768408-f881-4a09-9857-2e7580a4b1c2\") " pod="openshift-dns/node-resolver-gx5vl" Jan 21 13:09:17 crc kubenswrapper[4959]: 
I0121 13:09:17.615627 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-system-cni-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615654 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-cnibin\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615677 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-kubelet\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615701 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-daemon-config\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615777 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615797 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615812 4959 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615826 4959 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615838 4959 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615853 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615866 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615879 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615893 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615909 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615922 4959 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615939 4959 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.615963 4959 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.619563 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.620692 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.620799 4959 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.624082 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.624138 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.624151 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.624167 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.624196 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.630540 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.643583 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.654951 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.665605 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.676398 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.687107 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.696940 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.708426 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717273 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00d99d89-7cdc-410d-b2f3-347be806f79a-proxy-tls\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717335 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/867d68b2-3803-46b0-b974-62ec7ee89b49-cni-binary-copy\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717380 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-multus-certs\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717410 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f4768408-f881-4a09-9857-2e7580a4b1c2-hosts-file\") pod \"node-resolver-gx5vl\" (UID: \"f4768408-f881-4a09-9857-2e7580a4b1c2\") " pod="openshift-dns/node-resolver-gx5vl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717432 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717453 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-k8s-cni-cncf-io\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717473 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-cni-bin\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717491 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-cni-multus\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717508 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00d99d89-7cdc-410d-b2f3-347be806f79a-rootfs\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717526 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmklh\" (UniqueName: \"kubernetes.io/projected/00d99d89-7cdc-410d-b2f3-347be806f79a-kube-api-access-cmklh\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717546 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-system-cni-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717597 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-cnibin\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717613 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-kubelet\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717630 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-daemon-config\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717645 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99r4n\" (UniqueName: \"kubernetes.io/projected/f4768408-f881-4a09-9857-2e7580a4b1c2-kube-api-access-99r4n\") pod \"node-resolver-gx5vl\" (UID: \"f4768408-f881-4a09-9857-2e7580a4b1c2\") " pod="openshift-dns/node-resolver-gx5vl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717664 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-socket-dir-parent\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717680 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpg6h\" (UniqueName: \"kubernetes.io/projected/867d68b2-3803-46b0-b974-62ec7ee89b49-kube-api-access-dpg6h\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717695 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-os-release\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717710 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-etc-kubernetes\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717690 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-k8s-cni-cncf-io\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717784 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f4768408-f881-4a09-9857-2e7580a4b1c2-hosts-file\") pod \"node-resolver-gx5vl\" (UID: \"f4768408-f881-4a09-9857-2e7580a4b1c2\") " 
pod="openshift-dns/node-resolver-gx5vl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717840 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-multus-certs\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717726 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/342f1ad8-984e-41bd-acca-edad9366e45d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717869 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-kubelet\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717901 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00d99d89-7cdc-410d-b2f3-347be806f79a-rootfs\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717906 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-socket-dir-parent\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717930 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-cni-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.717960 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-cnibin\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718029 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-conf-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718078 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-cnibin\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718115 4959 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mstm5\" (UniqueName: \"kubernetes.io/projected/342f1ad8-984e-41bd-acca-edad9366e45d-kube-api-access-mstm5\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718156 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-cni-bin\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718208 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-hostroot\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718156 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-cni-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718177 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-system-cni-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718250 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-os-release\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718206 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-conf-dir\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718178 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-hostroot\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718197 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-cnibin\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718371 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-etc-kubernetes\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " 
pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718411 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-system-cni-dir\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718482 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-system-cni-dir\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718529 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-os-release\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718574 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-netns\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718647 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-os-release\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718694 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/342f1ad8-984e-41bd-acca-edad9366e45d-cni-binary-copy\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718709 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-run-netns\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718869 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d99d89-7cdc-410d-b2f3-347be806f79a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.718944 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/867d68b2-3803-46b0-b974-62ec7ee89b49-host-var-lib-cni-multus\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.719407 4959 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/867d68b2-3803-46b0-b974-62ec7ee89b49-cni-binary-copy\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.719608 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d99d89-7cdc-410d-b2f3-347be806f79a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.719753 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/342f1ad8-984e-41bd-acca-edad9366e45d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.720226 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/867d68b2-3803-46b0-b974-62ec7ee89b49-multus-daemon-config\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.720408 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/342f1ad8-984e-41bd-acca-edad9366e45d-cni-binary-copy\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.721352 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00d99d89-7cdc-410d-b2f3-347be806f79a-proxy-tls\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.722493 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.727080 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.727151 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.727163 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.727181 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.727208 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.734816 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.736517 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpg6h\" (UniqueName: \"kubernetes.io/projected/867d68b2-3803-46b0-b974-62ec7ee89b49-kube-api-access-dpg6h\") pod \"multus-w5zw9\" (UID: \"867d68b2-3803-46b0-b974-62ec7ee89b49\") " pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.738127 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmklh\" (UniqueName: \"kubernetes.io/projected/00d99d89-7cdc-410d-b2f3-347be806f79a-kube-api-access-cmklh\") pod \"machine-config-daemon-wwkrl\" (UID: \"00d99d89-7cdc-410d-b2f3-347be806f79a\") " pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.739790 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/342f1ad8-984e-41bd-acca-edad9366e45d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: 
I0121 13:09:17.743228 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99r4n\" (UniqueName: \"kubernetes.io/projected/f4768408-f881-4a09-9857-2e7580a4b1c2-kube-api-access-99r4n\") pod \"node-resolver-gx5vl\" (UID: \"f4768408-f881-4a09-9857-2e7580a4b1c2\") " pod="openshift-dns/node-resolver-gx5vl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.750966 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mstm5\" (UniqueName: \"kubernetes.io/projected/342f1ad8-984e-41bd-acca-edad9366e45d-kube-api-access-mstm5\") pod \"multus-additional-cni-plugins-tqwdg\" (UID: \"342f1ad8-984e-41bd-acca-edad9366e45d\") " pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.752196 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.763845 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.778631 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.788500 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.830055 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.843799 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.843883 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.843895 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.843921 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.843932 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.844082 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-w5zw9" Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.849086 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d99d89_7cdc_410d_b2f3_347be806f79a.slice/crio-f3f7eb715550454775a4da9bc2df2c43ed058e368949d90b2d10f17d501e379a WatchSource:0}: Error finding container f3f7eb715550454775a4da9bc2df2c43ed058e368949d90b2d10f17d501e379a: Status 404 returned error can't find the container with id f3f7eb715550454775a4da9bc2df2c43ed058e368949d90b2d10f17d501e379a Jan 21 13:09:17 crc kubenswrapper[4959]: W0121 13:09:17.856389 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod867d68b2_3803_46b0_b974_62ec7ee89b49.slice/crio-d13d7241e9959cc788faaf971f3b6625f8639f497938b95a37bdf7e882db2a4c WatchSource:0}: Error finding container d13d7241e9959cc788faaf971f3b6625f8639f497938b95a37bdf7e882db2a4c: Status 404 returned error can't find the container with id d13d7241e9959cc788faaf971f3b6625f8639f497938b95a37bdf7e882db2a4c Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.856911 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gx5vl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.861978 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-26tbg"] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.862419 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.864596 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.864928 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.865402 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.865972 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.874224 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x7k8s"] Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.875406 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.875513 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.878187 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.878225 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.878187 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.878393 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.878651 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.878659 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.878957 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.891526 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.896758 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.908569 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.920614 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.920759 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.920851 4959 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.920903 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:18.920888018 +0000 UTC m=+19.883918561 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:17 crc kubenswrapper[4959]: E0121 13:09:17.920996 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:09:18.92096845 +0000 UTC m=+19.883998993 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.925357 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.940310 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.951269 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.951309 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.951318 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.951337 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.951349 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:17Z","lastTransitionTime":"2026-01-21T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
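
The "Node became not ready" condition here is the kubelet's standard CNI gate: the container runtime network stays NotReady until a CNI config file appears in /etc/kubernetes/cni/net.d/ (the condition message, which continues just after this note, asks whether the network provider has started). On this node that file is written by ovnkube-node once it runs, which is why the same window shows its volumes being mounted. A short sketch, assuming only read access to the directory named in the message:

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        // The kubelet keeps reporting NetworkPluginNotReady until a CNI
        // config file shows up in this directory.
        const dir = "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", dir, err)
        }
        if len(entries) == 0 {
            fmt.Println("no CNI configuration yet; the node will stay NotReady")
            return
        }
        for _, e := range entries {
            fmt.Println("found CNI config:", e.Name())
        }
    }
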
Has your network provider started?"} Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.956754 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.972111 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.987431 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:17 crc kubenswrapper[4959]: I0121 13:09:17.998185 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.009603 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.020302 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
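
Every "Failed to update status for pod" entry in this window dies the same way: the API server has to call the pod.network-node-identity.openshift.io admission webhook before accepting the patch, and that webhook is served on the host network at 127.0.0.1:9743 by the network-node-identity pod, which is itself still in ContainerCreating. Until that pod runs, every Post is refused and the kubelet simply retries. A sketch of the same reachability check the errors imply, using the address from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Plain TCP dial against the webhook endpoint; while the
        // network-node-identity pod is down this fails with the same
        // "connection refused" seen in the status_manager errors.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
        if err != nil {
            fmt.Println("webhook not up:", err)
            return
        }
        conn.Close()
        fmt.Println("webhook port is accepting connections")
    }
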
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.022638 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-env-overrides\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.022688 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-netd\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.022751 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-etc-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.022779 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.022830 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-slash\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.022850 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-netns\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.022869 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-serviceca\") pod \"node-ca-26tbg\" (UID: \"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.023117 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovn-node-metrics-cert\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.023148 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-var-lib-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.023169 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvkhb\" (UniqueName: \"kubernetes.io/projected/eea635fd-8d4a-4b77-bb58-3d778f59c79e-kube-api-access-cvkhb\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.023200 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.023223 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-systemd-units\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.023244 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-bin\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.023267 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.023554 4959 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.023659 4959 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:19.02363775 +0000 UTC m=+19.986668293 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.023704 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.023717 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.023727 4959 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.023767 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:19.023748373 +0000 UTC m=+19.986778916 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.023287 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-script-lib\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024110 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-host\") pod \"node-ca-26tbg\" (UID: \"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024132 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024161 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024191 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-ovn-kubernetes\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024226 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x72h8\" (UniqueName: \"kubernetes.io/projected/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-kube-api-access-x72h8\") pod \"node-ca-26tbg\" (UID: \"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024252 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-log-socket\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024273 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-config\") pod \"ovnkube-node-x7k8s\" (UID: 
\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024297 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-kubelet\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024313 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-node-log\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024333 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-systemd\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.024351 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-ovn\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.024567 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.024595 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.024609 4959 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.025555 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:19.025539048 +0000 UTC m=+19.988569591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.029577 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.042616 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.055789 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.055853 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.055869 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.055895 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.055911 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.074135 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.121899 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.125957 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-kubelet\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.125988 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-node-log\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126007 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-ovn\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126022 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-systemd\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126039 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-env-overrides\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126059 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-etc-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126074 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-netd\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126106 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126088 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-kubelet\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126128 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-slash\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126146 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-netns\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126196 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-ovn\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126202 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-node-log\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126226 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-systemd\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126261 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-serviceca\") pod \"node-ca-26tbg\" (UID: 
\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126282 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126298 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-slash\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126302 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-etc-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126320 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-netd\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126373 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-netns\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126432 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovn-node-metrics-cert\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.126961 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-env-overrides\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127139 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-var-lib-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127295 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-var-lib-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127336 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-systemd-units\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127367 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-bin\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127388 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvkhb\" (UniqueName: \"kubernetes.io/projected/eea635fd-8d4a-4b77-bb58-3d778f59c79e-kube-api-access-cvkhb\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127422 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-script-lib\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127424 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-systemd-units\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127440 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-host\") pod \"node-ca-26tbg\" (UID: \"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127492 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127517 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-ovn-kubernetes\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127535 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x72h8\" (UniqueName: \"kubernetes.io/projected/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-kube-api-access-x72h8\") pod \"node-ca-26tbg\" (UID: \"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 
13:09:18.127564 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-log-socket\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127621 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-config\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127721 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-serviceca\") pod \"node-ca-26tbg\" (UID: \"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127790 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-host\") pod \"node-ca-26tbg\" (UID: \"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.127821 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-bin\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.128218 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-openvswitch\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.128270 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-ovn-kubernetes\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.128308 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-config\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.128353 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-log-socket\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.128862 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-script-lib\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.132648 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovn-node-metrics-cert\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.152445 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.161034 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.161069 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.161082 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.161283 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.161299 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.187014 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvkhb\" (UniqueName: \"kubernetes.io/projected/eea635fd-8d4a-4b77-bb58-3d778f59c79e-kube-api-access-cvkhb\") pod \"ovnkube-node-x7k8s\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.201325 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.204740 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x72h8\" (UniqueName: \"kubernetes.io/projected/8e27abf2-1c58-4c8e-9f92-d3323ee8d397-kube-api-access-x72h8\") pod \"node-ca-26tbg\" (UID: \"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\") " pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: W0121 13:09:18.216780 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeea635fd_8d4a_4b77_bb58_3d778f59c79e.slice/crio-97a2140e81393fe7364cb079817a44c98b2380df395e4670f8fdbb68a8936bae WatchSource:0}: Error finding container 97a2140e81393fe7364cb079817a44c98b2380df395e4670f8fdbb68a8936bae: Status 404 returned error can't find the container with id 97a2140e81393fe7364cb079817a44c98b2380df395e4670f8fdbb68a8936bae Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.241797 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:38:36.210714629 +0000 UTC Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.242081 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.264221 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.264277 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.264289 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.264316 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.264331 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.285219 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.285373 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.286328 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.317571 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.364147 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.367077 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.367152 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.367166 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.367193 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.367208 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.399381 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.417614 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.417668 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.417682 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"f3f7eb715550454775a4da9bc2df2c43ed058e368949d90b2d10f17d501e379a"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.419555 4959 generic.go:334] "Generic 
(PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6" exitCode=0 Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.419611 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.419643 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"97a2140e81393fe7364cb079817a44c98b2380df395e4670f8fdbb68a8936bae"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.421556 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"03dfdffddaee23e5fca48fa805319aae0ac04bf9b6363b6a6a1f70141b3a0487"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.424453 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.424516 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.424528 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ee3b9beeaaffc8ceca0d415cd1f29b2db9e194e838a61bd181ccb172518a9ddf"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.427939 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5zw9" event={"ID":"867d68b2-3803-46b0-b974-62ec7ee89b49","Type":"ContainerStarted","Data":"7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.427977 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5zw9" event={"ID":"867d68b2-3803-46b0-b974-62ec7ee89b49","Type":"ContainerStarted","Data":"d13d7241e9959cc788faaf971f3b6625f8639f497938b95a37bdf7e882db2a4c"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.429529 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.429569 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"03a908ce3ab395c66e9edd2fe3f0c4faf8e5c457d8e93c09e91b43636802eb4b"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.430804 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-gx5vl" event={"ID":"f4768408-f881-4a09-9857-2e7580a4b1c2","Type":"ContainerStarted","Data":"e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.430859 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gx5vl" event={"ID":"f4768408-f881-4a09-9857-2e7580a4b1c2","Type":"ContainerStarted","Data":"7d6d3c32eccf779205fd4791af1ac8d6c78291b74489bf09a963c78592d973a3"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.432663 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.433174 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerStarted","Data":"25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.433207 4959 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerStarted","Data":"a73bf72dfb6d07edba7efd0497236d84d7c095de147c29637b769f01c96536de"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.435123 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.436498 4959 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f" exitCode=255 Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.437057 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.474692 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.474720 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.474728 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.474742 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.474751 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.483636 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-26tbg" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.497615 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.497795 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.497898 4959 scope.go:117] "RemoveContainer" containerID="337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.552587 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.579301 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.579349 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.579364 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.579425 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.579439 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.604440 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.623226 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.661300 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.683731 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.683782 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.683793 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.683812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.683823 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.696660 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.738543 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.789210 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.789261 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.789273 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.789292 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.789305 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.797152 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.822626 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.853618 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.892443 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.892520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.892532 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.892558 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc 
kubenswrapper[4959]: I0121 13:09:18.892572 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.895131 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.937638 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.937958 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:09:20.937913507 +0000 UTC m=+21.900944060 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.938256 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.938373 4959 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:18 crc kubenswrapper[4959]: E0121 13:09:18.938453 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:20.93844095 +0000 UTC m=+21.901471503 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.939482 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.979228 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:18Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.995996 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.996086 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.996129 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.996155 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:18 crc kubenswrapper[4959]: I0121 13:09:18.996171 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:18Z","lastTransitionTime":"2026-01-21T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.013454 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.039802 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.039847 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.039869 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.039981 4959 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040020 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040038 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040042 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040078 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040105 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:21.040065724 +0000 UTC m=+22.003096267 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040170 4959 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040050 4959 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040237 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:21.040217438 +0000 UTC m=+22.003247981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.040316 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:21.0403039 +0000 UTC m=+22.003334443 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.055322 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.099458 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.099949 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.099962 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.099984 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.099934 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.099996 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.101577 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.118146 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.144180 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.160435 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.168594 4959 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.197439 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.203286 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.203319 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.203329 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.203345 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.203357 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.242258 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:04:48.423813189 +0000 UTC Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.243948 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30
a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure
-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.275843 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.285573 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.285912 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.285646 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:09:19 crc kubenswrapper[4959]: E0121 13:09:19.286221 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.290475 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.291718 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.293652 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.294849 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.296556 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.297684 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.301706 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.302701 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.305590 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.305790 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.305849 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.305862 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.305881 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.305898 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.306562 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.308061 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.309198 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.311286 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.312250 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.313240 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.315635 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.316571 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.316730 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.318358 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.319349 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.320401 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.321782 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.322449 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.322949 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.324167 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.324713 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes
dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.326018 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.326912 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.327916 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.328551 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.329419 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.329877 4959 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.329979 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.331950 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.332579 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.333000 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.334692 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.335824 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.336383 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.337388 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.338024 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.338901 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.339498 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.340805 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.341820 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.342286 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.342803 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.343705 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.344841 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.345377 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.345881 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.346725 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.347342 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.350581 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.351301 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.359058 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.404392 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.413469 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.413929 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.413946 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.413967 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.413983 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.435732 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.446473 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.446552 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.471958 4959 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.474541 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.474969 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.476613 4959 generic.go:334] "Generic (PLEG): container finished" podID="342f1ad8-984e-41bd-acca-edad9366e45d" containerID="25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a" exitCode=0 Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.476659 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerDied","Data":"25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.480472 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-26tbg" event={"ID":"8e27abf2-1c58-4c8e-9f92-d3323ee8d397","Type":"ContainerStarted","Data":"5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.480498 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-26tbg" event={"ID":"8e27abf2-1c58-4c8e-9f92-d3323ee8d397","Type":"ContainerStarted","Data":"c3f167d4bf6b652a513082fa39f084dad9ecffd307e9a8aaf016ef508a863c10"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.518900 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.518959 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.518974 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.518995 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.519010 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.533078 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b
4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.560860 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.584045 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.609872 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.624660 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.624696 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.624706 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.624727 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.624738 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.633488 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.674271 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.716208 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.730671 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.730721 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.730734 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.730752 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.730763 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.758584 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.801508 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.840908 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.840954 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.840962 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 
13:09:19.840982 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.840994 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.845106 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.875004 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.913987 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.943848 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.943904 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.943918 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.943943 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.943960 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:19Z","lastTransitionTime":"2026-01-21T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.956507 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:19 crc kubenswrapper[4959]: I0121 13:09:19.995752 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.033062 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.046910 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.047282 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.047392 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.047478 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.047563 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.075588 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.111985 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.151819 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.151884 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.151898 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.151919 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.151935 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.160180 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.194462 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.237500 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.242741 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:57:58.759514976 +0000 UTC Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.255054 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.255126 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.255137 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.255153 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.255163 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.277010 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.285115 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:20 crc kubenswrapper[4959]: E0121 13:09:20.285287 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
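
The recurring NodeNotReady condition is mechanical rather than mysterious: the kubelet reports the runtime network as not ready until a CNI configuration file exists in /etc/kubernetes/cni/net.d/, and that file is written by the ovnkube-node pod, whose containers are themselves still in PodInitializing. The gist of the check, as a plain illustration (not the kubelet's actual code):

    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory from the log message

    def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
        """True once at least one CNI config file is present."""
        try:
            return any(
                name.endswith((".conf", ".conflist", ".json"))
                for name in os.listdir(conf_dir)
            )
        except FileNotFoundError:
            return False

    print(network_ready())  # stays False until ovnkube writes its config

This is why every pod that needs the pod network sits in ContainerCreating, while the host-network pods in these patches (node-resolver, node-ca, multus, machine-config-daemon, all with hostIP equal to podIP 192.168.126.11) run normally.
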
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.315052 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.355245 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.357819 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.357860 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.357870 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.357886 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.357897 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.411231 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.460359 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.460404 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.460415 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.460430 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.460440 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.460505 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.475811 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.485406 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerStarted","Data":"70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.488069 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" 
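
The SyncLoop (PLEG) entries are the counterpoint to all the failed patches: the pod lifecycle event generator is observing sandboxes and containers actually starting (here for multus-additional-cni-plugins-tqwdg and ovnkube-node-x7k8s), so the local runtime is making progress even while every API-side status write bounces off the expired webhook certificate. A sketch for separating the two signals in a saved journal excerpt (the capture filename is hypothetical):

    import re

    # Field layout mirrors the PLEG lines above.
    pleg = re.compile(
        r'SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" '
        r'event=\{"ID":"[^"]+","Type":"(?P<type>[^"]+)"'
    )

    with open("kubelet-journal.txt", encoding="utf-8") as fh:  # hypothetical capture
        for line in fh:
            m = pleg.search(line)
            if m:
                print(m.group("type"), m.group("pod"))
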
event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.514626 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.552934 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.565145 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.565205 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.565219 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.565244 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.565259 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.592210 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.645680 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.668322 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.668357 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.668366 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.668383 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.668394 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
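[Editor's note] The kubelet keeps flagging the node NotReady for exactly the reason quoted here: nothing has written a CNI configuration to /etc/kubernetes/cni/net.d/ yet, which is expected while the ovnkube-node containers above are all still in PodInitializing. A minimal sketch of the corresponding on-node check, assuming Python 3; the first directory is the one the error message cites, and the other two are taken from the multus volumeMounts logged in this section, so treat them as assumptions about where the rendered config eventually lands:

    # Check the CNI config directories named in this log for any
    # *.conf / *.conflist files. Empty output for the first path
    # matches the NetworkPluginNotReady condition above.
    import glob
    import os

    cni_dirs = [
        "/etc/kubernetes/cni/net.d",   # path cited by the error
        "/etc/cni/net.d",              # assumption, from multus mounts
        "/run/multus/cni/net.d",       # assumption, from multus mounts
    ]

    for d in cni_dirs:
        if not os.path.isdir(d):
            print(f"{d}: directory missing")
            continue
        confs = sorted(glob.glob(os.path.join(d, "*.conf*")))
        print(f"{d}: {confs or 'no CNI config yet'}")
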
Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.675490 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.718080 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.757197 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.771735 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.771797 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.771810 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.771830 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.771842 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
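[Editor's note] Separately, every "Failed to update status for pod" entry in this stretch dies with the same root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is well behind the node clock (2026-01-21). A minimal sketch to confirm the validity window from the node, assuming Python 3 with the third-party cryptography package (the *_utc attributes need cryptography 42+; older releases expose naive not_valid_before/not_valid_after instead):

    # Fetch the webhook's serving cert and compare its validity window
    # to the current time. ssl.get_server_certificate performs no peer
    # verification when no CA bundle is passed, so it can still retrieve
    # an already-expired certificate.
    import ssl
    from datetime import datetime, timezone
    from cryptography import x509

    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.now(timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:  ", now > cert.not_valid_after_utc)
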
Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.794992 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.834723 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.874307 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.874359 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.874371 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.874390 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.874403 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.884815 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.920216 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.955166 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:20Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.959610 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.959727 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:20 crc kubenswrapper[4959]: E0121 13:09:20.959875 4959 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:20 crc kubenswrapper[4959]: E0121 13:09:20.959883 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 13:09:24.959852019 +0000 UTC m=+25.922882562 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:09:20 crc kubenswrapper[4959]: E0121 13:09:20.959960 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:24.959938581 +0000 UTC m=+25.922969204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.976934 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.977247 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.977403 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.977480 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:20 crc kubenswrapper[4959]: I0121 13:09:20.977497 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:20Z","lastTransitionTime":"2026-01-21T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
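[Editor's note] The reconciler errors at 13:09:20.959* above are a separate symptom from the webhook failures: a PVC teardown is parked on a 4s retry backoff because the kubevirt.io.hostpath-provisioner CSI driver has not (re)registered with this kubelet. A sketch that lists the node's plugin-registration sockets; /var/lib/kubelet/plugins_registry is the kubelet default and an assumption for this host:

    # See which CSI drivers have registered with this kubelet by
    # listing its plugin-registration sockets.
    import os

    REG_DIR = "/var/lib/kubelet/plugins_registry"  # kubelet default path
    DRIVER = "kubevirt.io.hostpath-provisioner"

    socks = os.listdir(REG_DIR) if os.path.isdir(REG_DIR) else []
    print("registered plugin sockets:", socks or "none")
    print(f"{DRIVER} present:", any(DRIVER in s for s in socks))
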
Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.008313 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.034048 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.060812 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.060858 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.060891 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061020 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061037 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061034 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061047 4959 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061062 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061065 4959 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061131 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:25.061111173 +0000 UTC m=+26.024141736 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061076 4959 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061175 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-21 13:09:25.061154404 +0000 UTC m=+26.024184967 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.061289 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:25.061253117 +0000 UTC m=+26.024283660 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.077660 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.079496 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.079534 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.079544 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.079559 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.079570 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.116218 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.157053 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.182709 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.182818 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.182861 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.182930 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.182955 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.201548 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z 
is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.233317 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.243628 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:59:19.355564649 +0000 UTC Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.275693 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.285329 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.285462 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.285505 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:21 crc kubenswrapper[4959]: E0121 13:09:21.285640 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.285845 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.285875 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.285887 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.285903 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.285916 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.312811 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.353453 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.388053 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.388129 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.388145 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.388168 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.388181 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.397499 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.433086 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.474976 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.492346 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.492399 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.492412 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.492433 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.492446 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.495551 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.514747 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.557694 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:21Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.596311 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.596378 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.596391 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.596412 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.596425 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.699980 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.700037 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.700053 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.700073 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.700091 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.802563 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.802604 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.802613 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.802629 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.802641 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.907037 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.907128 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.907143 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.907162 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:21 crc kubenswrapper[4959]: I0121 13:09:21.907179 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:21Z","lastTransitionTime":"2026-01-21T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.014302 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.014363 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.014372 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.014389 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.014401 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.117971 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.118017 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.118027 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.118043 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.118054 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.220983 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.221037 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.221050 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.221070 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.221083 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.244219 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:37:57.413406084 +0000 UTC Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.287768 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:22 crc kubenswrapper[4959]: E0121 13:09:22.292438 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.324157 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.324201 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.324213 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.324232 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.324246 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.427676 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.427719 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.427728 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.427744 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.427754 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.502815 4959 generic.go:334] "Generic (PLEG): container finished" podID="342f1ad8-984e-41bd-acca-edad9366e45d" containerID="70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259" exitCode=0 Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.502952 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerDied","Data":"70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.508671 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.508730 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.510131 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.525903 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.530812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.530868 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.530880 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.530901 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.530913 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.542593 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.555970 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.572079 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.591431 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.606428 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.623602 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.633216 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.633265 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.633276 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.633297 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.633308 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.639522 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.654467 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.670299 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.691650 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.707854 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.727250 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.735939 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.735995 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.736005 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.736023 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.736035 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.746458 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.767911 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.780407 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.802403 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.816740 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.836855 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.838429 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.838490 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.838510 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.838541 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.838562 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.849717 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.864387 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.881401 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.895649 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.919238 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.933082 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.941401 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.941471 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.941483 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.941506 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.941521 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:22Z","lastTransitionTime":"2026-01-21T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.948944 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.972683 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:22 crc kubenswrapper[4959]: I0121 13:09:22.991512 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.007141 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.022953 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.044153 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.044226 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.044242 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.044266 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.044281 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.147950 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.148022 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.148036 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.148055 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.148069 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.244714 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:01:48.07196392 +0000 UTC Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.251703 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.251727 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.251735 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.251748 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.251758 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.285516 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:23 crc kubenswrapper[4959]: E0121 13:09:23.285714 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.286222 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:23 crc kubenswrapper[4959]: E0121 13:09:23.286356 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.353298 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.353335 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.353345 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.353360 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.353370 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.456154 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.456221 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.456236 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.456261 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.456276 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.517503 4959 generic.go:334] "Generic (PLEG): container finished" podID="342f1ad8-984e-41bd-acca-edad9366e45d" containerID="537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea" exitCode=0 Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.517563 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerDied","Data":"537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.535833 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.555001 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.559633 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.559687 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.559699 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.559723 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.559739 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.570430 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.585226 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.614358 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.630591 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.646269 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.661577 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.663828 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.663862 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.663872 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.663893 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.663906 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.673932 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.689253 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.704141 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.718525 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.732480 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.753145 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.766029 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.766080 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.766109 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.766161 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.766194 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.768398 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:23Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.869258 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.869704 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.869899 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.870208 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.870439 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.974745 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.974894 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.974923 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.974955 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:23 crc kubenswrapper[4959]: I0121 13:09:23.974978 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:23Z","lastTransitionTime":"2026-01-21T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.077952 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.078030 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.078052 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.078072 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.078085 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.181411 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.181456 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.181466 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.181482 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.181491 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.245854 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:48:43.214382244 +0000 UTC Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.283785 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.283837 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.283848 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.283865 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.283877 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.286201 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:24 crc kubenswrapper[4959]: E0121 13:09:24.286399 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.386346 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.386397 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.386408 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.386424 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.386437 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.489406 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.489458 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.489468 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.489487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.489498 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.525316 4959 generic.go:334] "Generic (PLEG): container finished" podID="342f1ad8-984e-41bd-acca-edad9366e45d" containerID="869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd" exitCode=0 Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.525375 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerDied","Data":"869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.545309 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.559427 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.573758 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.592578 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.593447 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.593569 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.593612 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.593631 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.594640 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.618484 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.634752 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.649810 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.666395 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.680845 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.695392 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.697780 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.697824 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.697834 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.697854 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.697863 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.724823 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b
9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.738243 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.753467 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.767009 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.800647 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.800690 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.800700 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.800719 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.800733 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.816986 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:
09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.903561 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.903614 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.903623 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.903643 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:24 crc kubenswrapper[4959]: I0121 13:09:24.903660 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:24Z","lastTransitionTime":"2026-01-21T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.001601 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.001815 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.001932 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:09:33.001865456 +0000 UTC m=+33.964896039 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.001982 4959 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.002124 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:33.002080841 +0000 UTC m=+33.965111384 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.005864 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.005918 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.005929 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.005950 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.005963 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.102572 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.102649 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.102681 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.102848 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.102868 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.102882 4959 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.102958 4959 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.103056 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.103112 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.103129 4959 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.102972 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:33.102942946 +0000 UTC m=+34.065973489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.103282 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:33.103198642 +0000 UTC m=+34.066229195 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.103310 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:33.103299305 +0000 UTC m=+34.066329868 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.114062 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.114192 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.114223 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.114262 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.114285 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.217370 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.217427 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.217439 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.217458 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.217474 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.246053 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:54:43.64641328 +0000 UTC Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.285559 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.285766 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.285846 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:25 crc kubenswrapper[4959]: E0121 13:09:25.285996 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.321435 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.321485 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.321496 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.321513 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.321523 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.424753 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.424827 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.424848 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.424870 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.424885 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.529614 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.529684 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.529706 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.529750 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.529770 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.535553 4959 generic.go:334] "Generic (PLEG): container finished" podID="342f1ad8-984e-41bd-acca-edad9366e45d" containerID="15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3" exitCode=0 Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.535664 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerDied","Data":"15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.545306 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.559117 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.596176 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.610777 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.628892 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.635729 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.635781 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.635794 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.635817 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.635833 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.645304 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mst
m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.657806 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.669705 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.688223 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.704410 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.720326 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.732130 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.738609 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.738652 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.738663 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.738686 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.738700 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.744455 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.755470 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.767919 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.782729 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:25Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.841760 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.841815 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.841832 4959 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.841860 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.841883 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.944139 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.944176 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.944185 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.944199 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:25 crc kubenswrapper[4959]: I0121 13:09:25.944208 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:25Z","lastTransitionTime":"2026-01-21T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.047316 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.047368 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.047378 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.047397 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.047411 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.150958 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.151431 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.151443 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.151465 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.151475 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.246665 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:59:26.111649836 +0000 UTC Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.254011 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.254044 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.254055 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.254076 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.254088 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.285333 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:26 crc kubenswrapper[4959]: E0121 13:09:26.285498 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.357032 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.357063 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.357485 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.357612 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.357626 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.466518 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.466557 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.466566 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.466582 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.466592 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.551818 4959 generic.go:334] "Generic (PLEG): container finished" podID="342f1ad8-984e-41bd-acca-edad9366e45d" containerID="ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69" exitCode=0 Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.552160 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerDied","Data":"ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.571012 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.571180 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.571255 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.571351 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.571428 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.575228 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.601568 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.627257 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.644807 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.659877 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.675162 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.675226 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.675242 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.675300 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.675320 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.679614 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.696049 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.710271 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.732049 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.747681 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.759926 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.773668 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.781811 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.781851 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.781864 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.781888 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.781902 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.787610 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.807800 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.823923 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.885806 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.885855 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.885864 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.885881 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.885891 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.988554 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.988590 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.988603 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.988620 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:26 crc kubenswrapper[4959]: I0121 13:09:26.988635 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:26Z","lastTransitionTime":"2026-01-21T13:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.091246 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.091739 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.091757 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.091781 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.091795 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.194793 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.194827 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.194839 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.194859 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.194873 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.247704 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:01:39.055650426 +0000 UTC Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.286178 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:27 crc kubenswrapper[4959]: E0121 13:09:27.286399 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.286474 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:27 crc kubenswrapper[4959]: E0121 13:09:27.286637 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.297280 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.297327 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.297340 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.297360 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.297374 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.400614 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.400676 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.400692 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.400716 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.400731 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.504741 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.504827 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.504841 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.504864 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.504874 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.560672 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.561163 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.561232 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.561256 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.566755 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" event={"ID":"342f1ad8-984e-41bd-acca-edad9366e45d","Type":"ContainerStarted","Data":"267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.577900 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.587668 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.588491 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.596190 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.607656 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.607702 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.607713 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.607732 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.607743 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.611249 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.624860 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.636574 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.649586 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.671126 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.687932 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.701190 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.710438 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.710488 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.710497 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.710522 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.710532 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.714241 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.727507 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.741588 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.763679 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.780683 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.795316 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.808014 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.813031 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.813076 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.813087 4959 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.813135 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.813173 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.822836 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.835846 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.850143 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.865357 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.888851 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.904222 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.916192 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.916246 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.916258 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.916280 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.916294 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.919198 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.934948 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.955332 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.972574 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.976968 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.977045 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.977059 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.977080 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.977113 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.984905 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: E0121 13:09:27.989708 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"e
b8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.993502 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.993542 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.993553 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.993574 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.993587 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:27Z","lastTransitionTime":"2026-01-21T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:27 crc kubenswrapper[4959]: I0121 13:09:27.998348 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:27Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:28 crc kubenswrapper[4959]: E0121 13:09:28.006335 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…duplicate image inventory omitted; byte-identical to the image list in the 13:09:27 attempt above…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.010977 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.011031 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.011046 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.011068 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.011082 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.015291 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:28 crc kubenswrapper[4959]: E0121 13:09:28.027863 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…duplicate image inventory omitted; byte-identical to the image list in the 13:09:27 attempt above…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.032047 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.032106 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.032120 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.032140 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.032154 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.039909 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa83
1b332581503123a1fe8d48a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:28 crc kubenswrapper[4959]: E0121 13:09:28.044182 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…duplicate image inventory omitted; byte-identical to the image list in the 13:09:27 attempt above…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.048282 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]:
I0121 13:09:28.048340 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.048357 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.048383 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.048395 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: E0121 13:09:28.060844 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:28Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:28 crc kubenswrapper[4959]: E0121 13:09:28.061040 4959 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.063112 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.063145 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.063159 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.063180 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.063195 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.166173 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.166215 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.166224 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.166242 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.166253 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.248660 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:22:03.457122868 +0000 UTC Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.269435 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.269492 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.269506 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.269526 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.269540 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.285929 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:28 crc kubenswrapper[4959]: E0121 13:09:28.286149 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.372407 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.372462 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.372470 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.372488 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.372498 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.475819 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.475873 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.475883 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.475902 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.475913 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.579084 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.579185 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.579207 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.579237 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.579252 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.683405 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.683485 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.683507 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.683543 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.683563 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.787581 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.787631 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.787643 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.787665 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.787679 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.890763 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.890842 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.890886 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.890915 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.890942 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.994539 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.994588 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.994600 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.994623 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:28 crc kubenswrapper[4959]: I0121 13:09:28.994657 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:28Z","lastTransitionTime":"2026-01-21T13:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.098278 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.098343 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.098362 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.098393 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.098413 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.202427 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.202482 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.202495 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.202545 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.202559 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.249614 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:57:34.178935563 +0000 UTC Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.285764 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.285820 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:29 crc kubenswrapper[4959]: E0121 13:09:29.285973 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:29 crc kubenswrapper[4959]: E0121 13:09:29.286129 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.302819 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79
f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\
\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.305523 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.305577 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.305586 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.305607 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.305617 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.320798 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.337532 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.351669 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.364641 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.395992 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.407692 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.407754 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.407771 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.407797 4959 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.407815 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.414886 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.431028 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.445496 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.461476 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.475431 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.490016 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.505377 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.510369 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.510452 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.510471 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.510505 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.510524 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.520333 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.543222 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.613596 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.613870 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.613933 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.614022 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.614134 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.716801 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.717158 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.717252 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.717322 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.717390 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.821068 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.821172 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.821185 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.821207 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.821221 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.923891 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.923931 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.923943 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.923962 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:29 crc kubenswrapper[4959]: I0121 13:09:29.923975 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:29Z","lastTransitionTime":"2026-01-21T13:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.026866 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.026901 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.026912 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.026927 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.026936 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.129394 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.129453 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.129467 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.129489 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.129503 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.232368 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.232434 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.232445 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.232470 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.232483 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.250792 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:12:39.382028529 +0000 UTC Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.285658 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:30 crc kubenswrapper[4959]: E0121 13:09:30.285867 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.335701 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.335750 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.335762 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.335780 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.335791 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.438895 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.438942 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.438955 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.438974 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.438984 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.541666 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.541720 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.541737 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.541768 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.541786 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.582378 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/0.log" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.586243 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4" exitCode=1 Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.586317 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.587137 4959 scope.go:117] "RemoveContainer" containerID="db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.602486 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.625598 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.641610 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.646133 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.646513 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.646662 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.646755 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.646842 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.658597 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.676553 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.691041 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 
13:09:30.712933 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"NetworkPolicy event handler 4\\\\nI0121 13:09:28.924246 6283 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924511 6283 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924696 6283 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925150 6283 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925266 6283 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925316 6283 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925456 6283 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:09:28.926050 6283 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.729679 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.745141 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.749982 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.750024 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.750035 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.750049 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.750060 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.760213 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.773997 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.788354 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.805387 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.818630 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.829986 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.853234 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.853291 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.853301 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.853323 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.853335 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.872616 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g"] Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.873226 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.875351 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.875999 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.887987 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.904582 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.925563 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.942334 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"
reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473
a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.956149 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.956205 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.956214 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.956230 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.956240 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:30Z","lastTransitionTime":"2026-01-21T13:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.957782 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.967327 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.967393 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.967437 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.967461 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrbs\" (UniqueName: \"kubernetes.io/projected/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-kube-api-access-qkrbs\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.969837 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.980052 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:30 crc kubenswrapper[4959]: I0121 13:09:30.991268 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:30Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.008245 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"NetworkPolicy event handler 4\\\\nI0121 13:09:28.924246 6283 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924511 6283 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924696 6283 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925150 6283 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925266 6283 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925316 6283 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925456 6283 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:09:28.926050 6283 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.021650 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.037173 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.052244 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.059803 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.061540 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.061585 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.061625 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.061665 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.068074 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.068211 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrbs\" (UniqueName: \"kubernetes.io/projected/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-kube-api-access-qkrbs\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.068651 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.068874 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.069088 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.069226 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.069572 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.080700 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.087260 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrbs\" (UniqueName: \"kubernetes.io/projected/ab27e9ee-7556-4ae0-ab20-e7a689b15e7d-kube-api-access-qkrbs\") pod \"ovnkube-control-plane-749d76644c-w9q9g\" (UID: \"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.087273 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.103313 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.116154 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.165213 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.165270 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.165283 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.165304 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.165594 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.187275 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.251685 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:18:48.565604361 +0000 UTC Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.270810 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.271294 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.271308 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.271336 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.271350 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.285458 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:31 crc kubenswrapper[4959]: E0121 13:09:31.285639 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.286284 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:31 crc kubenswrapper[4959]: E0121 13:09:31.286376 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.374806 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.374849 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.374858 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.374873 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.374888 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.478534 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.478567 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.478577 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.478594 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.478605 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.601857 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.601901 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.601910 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.601925 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.601935 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.624983 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" event={"ID":"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d","Type":"ContainerStarted","Data":"6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.625056 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" event={"ID":"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d","Type":"ContainerStarted","Data":"da287ce22d35ceee0b7ce64a07c830d80e997e0d0b3d2156450273bbc581afc6"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.627429 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/0.log" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.640379 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.641251 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.675303 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.692760 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.704399 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.704490 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.704509 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.704531 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.704546 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.709501 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.722660 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.733229 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.744453 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.768786 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"NetworkPolicy event handler 4\\\\nI0121 13:09:28.924246 6283 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924511 6283 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924696 6283 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925150 6283 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925266 6283 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925316 6283 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925456 6283 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:09:28.926050 6283 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.781413 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.793362 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.807739 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.807793 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.807812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.807842 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.807860 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.813228 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.826040 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.844734 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.860039 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.881153 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.905514 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.916416 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.916490 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.916515 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.916558 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.916587 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:31Z","lastTransitionTime":"2026-01-21T13:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:31 crc kubenswrapper[4959]: I0121 13:09:31.934323 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:31Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.012282 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6mzgn"] Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.012822 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:32 crc kubenswrapper[4959]: E0121 13:09:32.013049 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.020502 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.020541 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.020552 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.020568 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.020582 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.033210 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.056772 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.075795 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.080957 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.080996 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d4gv\" (UniqueName: \"kubernetes.io/projected/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-kube-api-access-5d4gv\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.101167 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.123507 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.123573 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.123591 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.123618 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.123635 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.126766 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.150474 4959 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.169088 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.182240 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.182289 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d4gv\" (UniqueName: \"kubernetes.io/projected/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-kube-api-access-5d4gv\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:32 crc kubenswrapper[4959]: E0121 13:09:32.182509 4959 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:32 crc kubenswrapper[4959]: E0121 13:09:32.182617 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs podName:2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:32.682591249 +0000 UTC m=+33.645621792 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs") pod "network-metrics-daemon-6mzgn" (UID: "2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.184681 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.197251 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.214865 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d4gv\" (UniqueName: \"kubernetes.io/projected/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-kube-api-access-5d4gv\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.222280 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"NetworkPolicy event handler 4\\\\nI0121 13:09:28.924246 6283 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924511 6283 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924696 6283 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925150 6283 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925266 6283 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925316 6283 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925456 6283 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:09:28.926050 6283 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.227039 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.227075 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.227088 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.227127 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.227144 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.237604 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.252792 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:26:16.567769832 +0000 UTC Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.254707 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.268286 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.286266 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:32 crc kubenswrapper[4959]: E0121 13:09:32.286582 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.287220 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.304551 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.322682 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.330174 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.330236 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.330256 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.330294 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.330320 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.347294 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.433297 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.433622 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.433690 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.433778 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.433861 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.536274 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.536318 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.536330 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.536348 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.536362 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.640408 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.640507 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.640539 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.640583 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.640614 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.647087 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" event={"ID":"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d","Type":"ContainerStarted","Data":"d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.687846 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:32 crc kubenswrapper[4959]: E0121 13:09:32.687993 4959 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:32 crc kubenswrapper[4959]: E0121 13:09:32.688071 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs podName:2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:33.688050212 +0000 UTC m=+34.651080775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs") pod "network-metrics-daemon-6mzgn" (UID: "2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.743323 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.743372 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.743386 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.743405 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.743418 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.846866 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.847238 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.847337 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.847431 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.847507 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.950521 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.950576 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.950590 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.950609 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:32 crc kubenswrapper[4959]: I0121 13:09:32.950624 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:32Z","lastTransitionTime":"2026-01-21T13:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.053890 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.053937 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.053950 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.053968 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.053986 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.092279 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.092402 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.092459 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:09:49.092436654 +0000 UTC m=+50.055467197 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.092467 4959 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.092514 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:49.092504796 +0000 UTC m=+50.055535339 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.157765 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.157812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.157822 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.157839 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.157851 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.193499 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.193595 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.193631 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193701 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193758 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193781 4959 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193780 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193803 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193817 4959 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193866 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:49.193842003 +0000 UTC m=+50.156872556 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193889 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:49.193880794 +0000 UTC m=+50.156911337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.193899 4959 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.194061 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:49.194025817 +0000 UTC m=+50.157056400 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.253846 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:25:08.94693212 +0000 UTC Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.259874 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.259927 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.259944 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.259969 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.259993 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.285869 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.285969 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.286032 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.286150 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.362239 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.362287 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.362298 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.362316 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.362328 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.464902 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.464938 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.464949 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.464965 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.464975 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.568168 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.568212 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.568221 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.568236 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.568245 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.671069 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.671185 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.671206 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.671231 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.671250 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.672142 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.687110 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.697353 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.698429 4959 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: E0121 13:09:33.698681 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs podName:2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:35.698632118 +0000 UTC m=+36.661662861 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs") pod "network-metrics-daemon-6mzgn" (UID: "2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.702836 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.721582 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.736386 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.751259 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.774355 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.774428 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.774449 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.774481 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.774502 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.791200 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.810132 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.830161 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.843897 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.865090 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.878253 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.878319 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.878336 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.878364 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.878382 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.886699 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.905563 4959 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88
a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.925787 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.945873 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.962258 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.981402 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.981441 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.981452 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.981468 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.981479 4959 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:33Z","lastTransitionTime":"2026-01-21T13:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:33 crc kubenswrapper[4959]: I0121 13:09:33.989557 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"NetworkPolicy event handler 4\\\\nI0121 13:09:28.924246 6283 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924511 6283 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924696 6283 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925150 6283 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925266 6283 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925316 6283 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925456 6283 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:09:28.926050 6283 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:33Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.084071 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.084528 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.084538 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.084553 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.084566 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.187038 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.187127 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.187144 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.187163 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.187177 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.254573 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:50:47.544665139 +0000 UTC
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.286020 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.286035 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:09:34 crc kubenswrapper[4959]: E0121 13:09:34.286164 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:09:34 crc kubenswrapper[4959]: E0121 13:09:34.286301 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.290527 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.290607 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.290629 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.290659 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.290677 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.394005 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.394054 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.394066 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.394083 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.394119 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.497459 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.497535 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.497550 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.497574 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.497586 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.600406 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.600456 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.600468 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.600485 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.600497 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.658868 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/1.log"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.659779 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/0.log"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.663217 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e" exitCode=1
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.663300 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e"}
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.663404 4959 scope.go:117] "RemoveContainer" containerID="db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.664323 4959 scope.go:117] "RemoveContainer" containerID="9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e"
Jan 21 13:09:34 crc kubenswrapper[4959]: E0121 13:09:34.664514 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.682202 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.694981 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.703081 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.703149 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.703158 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.703176 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.703188 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.710196 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.728689 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.746752 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.761776 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.797503 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9
c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.806117 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.806171 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.806184 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.806208 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.806220 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.815014 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.832071 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.844063 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.858191 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.875308 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.875578 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e
40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.893637 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.910054 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.910149 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.910178 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.910199 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.910214 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:34Z","lastTransitionTime":"2026-01-21T13:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.917472 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.934533 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.952830 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.980234 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"NetworkPolicy event handler 4\\\\nI0121 13:09:28.924246 6283 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924511 6283 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924696 6283 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925150 6283 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925266 6283 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925316 6283 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925456 6283 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:09:28.926050 6283 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built 
service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z"
Jan 21 13:09:34 crc kubenswrapper[4959]: I0121 13:09:34.997923 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:34Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.012724 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.012806 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.012829 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.012862 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.012886 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.012978 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.025189 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.035971 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.064183 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"NetworkPolicy event handler 4\\\\nI0121 13:09:28.924246 6283 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924511 6283 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924696 6283 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925150 6283 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925266 6283 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925316 6283 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925456 6283 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:09:28.926050 6283 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built 
service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z"
Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.078067 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 
13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.096421 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.118361 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.118459 4959 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.118362 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.118487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.118674 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.118762 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.140527 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.157449 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.180319 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.216160 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.221547 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.221818 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.221925 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.222044 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.222179 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.233756 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.251478 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.254789 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:23:33.218317083 +0000 UTC Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.266085 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.283446 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.285683 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.285683 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:35 crc kubenswrapper[4959]: E0121 13:09:35.285911 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:35 crc kubenswrapper[4959]: E0121 13:09:35.285967 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.301827 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"container
ID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:35Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 
13:09:35.324834 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.324881 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.324894 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.324909 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.324921 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.428342 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.428747 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.428870 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.429003 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.429144 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.532216 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.532276 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.532285 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.532305 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.532315 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.634602 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.634897 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.634994 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.635171 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.635264 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.668742 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/1.log" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.721735 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:35 crc kubenswrapper[4959]: E0121 13:09:35.721930 4959 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:35 crc kubenswrapper[4959]: E0121 13:09:35.722015 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs podName:2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:39.721996046 +0000 UTC m=+40.685026589 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs") pod "network-metrics-daemon-6mzgn" (UID: "2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.738081 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.738164 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.738182 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.738205 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.738217 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.840857 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.840939 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.840953 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.840971 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.840995 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.943710 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.943762 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.943773 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.943791 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:35 crc kubenswrapper[4959]: I0121 13:09:35.943809 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:35Z","lastTransitionTime":"2026-01-21T13:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.046165 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.046204 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.046214 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.046228 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.046237 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.149451 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.149514 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.149532 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.149618 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.149646 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.254265 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.254661 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.254791 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.254874 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.254934 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.256758 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:34:05.675446094 +0000 UTC Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.286112 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.286289 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:36 crc kubenswrapper[4959]: E0121 13:09:36.286546 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:09:36 crc kubenswrapper[4959]: E0121 13:09:36.286889 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.359043 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.359202 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.359229 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.359264 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.359289 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.463501 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.463973 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.464168 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.464477 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.464661 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.567655 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.567729 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.567749 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.567781 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.567806 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.671204 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.671592 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.671684 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.671775 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.672385 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.775799 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.776165 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.776281 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.776363 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.776423 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.879044 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.879114 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.879125 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.879139 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.879150 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.981839 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.981879 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.981889 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.981907 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:36 crc kubenswrapper[4959]: I0121 13:09:36.981920 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:36Z","lastTransitionTime":"2026-01-21T13:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.084069 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.084129 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.084140 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.084154 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.084168 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.187486 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.187767 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.187844 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.187980 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.188067 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.257822 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:55:23.513418095 +0000 UTC Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.285444 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:37 crc kubenswrapper[4959]: E0121 13:09:37.285602 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.286050 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:37 crc kubenswrapper[4959]: E0121 13:09:37.286488 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.290480 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.290590 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.290618 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.290646 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.290671 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.393543 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.393639 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.393655 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.393678 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.393693 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.496370 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.496407 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.496417 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.496434 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.496445 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.600279 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.600356 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.600379 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.600409 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.600430 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.703529 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.703583 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.703592 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.703607 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.703618 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.806175 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.806227 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.806237 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.806251 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.806262 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.908861 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.908903 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.908913 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.908930 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:37 crc kubenswrapper[4959]: I0121 13:09:37.908942 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:37Z","lastTransitionTime":"2026-01-21T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.131946 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.131998 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.132007 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.132026 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.132039 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.170157 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.170204 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.170212 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.170228 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.170237 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: E0121 13:09:38.183314 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:38Z is after 
2025-08-24T17:21:41Z" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.188153 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.188203 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.188218 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.188236 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.188249 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: E0121 13:09:38.204340 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:38Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.212972 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.213237 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.213351 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.213431 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.213489 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: E0121 13:09:38.227177 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:38Z is after 
2025-08-24T17:21:41Z" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.231707 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.231868 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.231969 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.232051 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.232182 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: E0121 13:09:38.245241 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:38Z is after 
2025-08-24T17:21:41Z" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.248647 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.248803 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.248941 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.249048 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.249149 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.257949 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:31:16.577173394 +0000 UTC Jan 21 13:09:38 crc kubenswrapper[4959]: E0121 13:09:38.260913 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:38Z is after 
2025-08-24T17:21:41Z" Jan 21 13:09:38 crc kubenswrapper[4959]: E0121 13:09:38.261507 4959 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.263398 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.263498 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.263564 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.263632 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.263791 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.285753 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:38 crc kubenswrapper[4959]: E0121 13:09:38.285893 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.285764 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:38 crc kubenswrapper[4959]: E0121 13:09:38.286288 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.366554 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.366600 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.366607 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.366621 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.366631 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.469845 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.469901 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.469917 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.469939 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.469956 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.572631 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.572691 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.572709 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.572746 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.572764 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.675924 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.676013 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.676026 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.676042 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.676077 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.778808 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.778877 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.778890 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.778905 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.778914 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.881205 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.881246 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.881256 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.881271 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.881284 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.983658 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.983694 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.983705 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.983717 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:38 crc kubenswrapper[4959]: I0121 13:09:38.983726 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:38Z","lastTransitionTime":"2026-01-21T13:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.086035 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.086076 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.086086 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.086120 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.086132 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.189113 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.189149 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.189157 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.189171 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.189184 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.258520 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:05:24.357612686 +0000 UTC Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.286139 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.286342 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:39 crc kubenswrapper[4959]: E0121 13:09:39.286449 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:39 crc kubenswrapper[4959]: E0121 13:09:39.286568 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.292764 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.292823 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.292856 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.292878 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.292894 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.303399 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.319621 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.335911 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 
13:09:39.356929 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db3b159e428c6f41a8718036ba670d98c1e6aa831b332581503123a1fe8d48a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"message\\\":\\\"NetworkPolicy event handler 4\\\\nI0121 13:09:28.924246 6283 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924511 6283 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.924696 6283 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925150 6283 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:28.925266 6283 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925316 6283 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:09:28.925456 6283 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:09:28.926050 6283 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, 
S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.371629 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.389232 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.394801 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.394844 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.394857 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.394881 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.394895 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.401584 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.412676 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.424280 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.438520 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.450621 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.464606 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.505359 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.505406 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.505417 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.505432 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.505444 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.515260 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.530953 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.544153 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.565470 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.577594 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486
615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.608604 4959 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.608636 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.608644 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.608660 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.608670 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.711459 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.711497 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.711507 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.711524 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.711534 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.764755 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:39 crc kubenswrapper[4959]: E0121 13:09:39.764965 4959 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:39 crc kubenswrapper[4959]: E0121 13:09:39.765090 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs podName:2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585 nodeName:}" failed. No retries permitted until 2026-01-21 13:09:47.765063149 +0000 UTC m=+48.728093902 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs") pod "network-metrics-daemon-6mzgn" (UID: "2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.815068 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.815170 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.815189 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.815219 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.815240 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.918384 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.918496 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.918521 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.918553 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:39 crc kubenswrapper[4959]: I0121 13:09:39.918576 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:39Z","lastTransitionTime":"2026-01-21T13:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.022124 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.022182 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.022198 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.022218 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.022232 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.125315 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.125369 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.125383 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.125406 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.125420 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.228762 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.228820 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.228834 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.228855 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.228869 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.259523 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:49:17.556237798 +0000 UTC Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.286201 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:40 crc kubenswrapper[4959]: E0121 13:09:40.286358 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.286459 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:40 crc kubenswrapper[4959]: E0121 13:09:40.286709 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.330869 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.330909 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.330921 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.330937 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.330948 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.433536 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.433579 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.433595 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.433613 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.433625 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.535893 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.535982 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.535997 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.536019 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.536033 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.638972 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.639014 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.639028 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.639044 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.639056 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.741904 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.741951 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.741962 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.741979 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.742005 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.844921 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.844981 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.844992 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.845013 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.845029 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.948336 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.948392 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.948402 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.948420 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:40 crc kubenswrapper[4959]: I0121 13:09:40.948432 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:40Z","lastTransitionTime":"2026-01-21T13:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.051782 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.051829 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.051845 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.051865 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.051875 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.154640 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.154689 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.154702 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.154725 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.154738 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.257138 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.257192 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.257206 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.257281 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.257298 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.260319 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:31:23.728363635 +0000 UTC Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.285774 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.285920 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:41 crc kubenswrapper[4959]: E0121 13:09:41.286063 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:41 crc kubenswrapper[4959]: E0121 13:09:41.286201 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.359685 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.359738 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.359753 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.359771 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.359782 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.462585 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.462637 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.462650 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.462668 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.462684 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.566990 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.567062 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.567081 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.567260 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.567294 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.670569 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.670618 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.670631 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.670652 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.670668 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.774455 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.774504 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.774514 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.774537 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.774548 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.878237 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.878295 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.878308 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.878332 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.878345 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.981915 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.981978 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.981996 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.982018 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:41 crc kubenswrapper[4959]: I0121 13:09:41.982030 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:41Z","lastTransitionTime":"2026-01-21T13:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.084763 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.084817 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.084829 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.084855 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.084869 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.187809 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.187882 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.187897 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.187923 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.187942 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.260589 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:07:19.160124554 +0000 UTC Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.285284 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.285343 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:42 crc kubenswrapper[4959]: E0121 13:09:42.285475 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:42 crc kubenswrapper[4959]: E0121 13:09:42.285606 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.291043 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.291078 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.291088 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.291125 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.291137 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.393491 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.393556 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.393572 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.393595 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.393609 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.496637 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.496720 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.496734 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.496754 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.496769 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.599998 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.600074 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.600086 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.600145 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.600157 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.702727 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.702771 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.702784 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.702800 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.702816 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.805676 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.805710 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.805718 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.805734 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.805742 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.909170 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.909236 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.909252 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.909282 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:42 crc kubenswrapper[4959]: I0121 13:09:42.909303 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:42Z","lastTransitionTime":"2026-01-21T13:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.012134 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.012202 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.012215 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.012231 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.012243 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:43Z","lastTransitionTime":"2026-01-21T13:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.115639 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.115707 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.115723 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.115741 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.115754 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:43Z","lastTransitionTime":"2026-01-21T13:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the five-entry node-status block above repeats at roughly 100 ms intervals through 13:09:48; all but the first and last repetitions are elided, as only the timestamps change]
Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.261058 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:27:56.704965952 +0000 UTC
Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.285643 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:09:43 crc kubenswrapper[4959]: I0121 13:09:43.285697 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:09:43 crc kubenswrapper[4959]: E0121 13:09:43.285787 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:09:43 crc kubenswrapper[4959]: E0121 13:09:43.285972 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
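The pattern above is the whole story of this excerpt: the kubelet republishes NodeNotReady and refuses to sync any pod that needs a pod network, because no CNI configuration exists yet under /etc/kubernetes/cni/net.d/. The loop ends as soon as a *.conf or *.conflist file appears there. As a hedged illustration only (on OpenShift/CRC this file is written by the cluster network operator, e.g. Multus/OVN-Kubernetes, never by hand), a minimal Python sketch of the kind of file that satisfies the check; the file name and the loopback-only network are assumptions:

```python
# Illustrative sketch only: write a minimal loopback-only CNI config into the
# directory named in the log. On a real OpenShift/CRC node this file comes from
# the cluster network operator; the file name and "loopback" network here are
# assumptions chosen purely to show what ends the NetworkPluginNotReady loop.
import json
import pathlib

cni_dir = pathlib.Path("/etc/kubernetes/cni/net.d")  # path from the log messages
conf = {
    "cniVersion": "0.3.1",
    "name": "placeholder-net",  # hypothetical network name
    "type": "loopback",         # standard CNI loopback plugin
}
cni_dir.mkdir(parents=True, exist_ok=True)
(cni_dir / "99-placeholder.conf").write_text(json.dumps(conf, indent=2))
```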
Jan 21 13:09:44 crc kubenswrapper[4959]: I0121 13:09:44.261739 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:15:30.96404619 +0000 UTC
Jan 21 13:09:44 crc kubenswrapper[4959]: I0121 13:09:44.285539 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:09:44 crc kubenswrapper[4959]: E0121 13:09:44.285763 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:09:44 crc kubenswrapper[4959]: I0121 13:09:44.285567 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:09:44 crc kubenswrapper[4959]: E0121 13:09:44.286177 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
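The certificate_manager line is unrelated to the CNI loop but recurs for a similar reason: the rotation deadline printed here changes on every pass (2025-11-28 above, 2025-12-21 here, and so on) because client-go re-derives a jittered deadline, roughly 70-90% of the certificate lifetime, on each evaluation, and every such deadline already lies in the past relative to the log's clock (2026-01-21), so rotation keeps being re-attempted. A sketch of that computation; the notBefore time is an assumption, since the log states only the expiration:

```python
# Sketch of client-go's jittered rotation deadline (~80% +/- 10% of the cert
# lifetime, re-randomized on each evaluation). The notBefore value is an
# assumption; the log only states the expiration time.
import random
from datetime import datetime, timedelta

not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiration from the log
not_before = not_after - timedelta(days=365)   # assumed one-year lifetime
lifetime = not_after - not_before

deadline = not_before + lifetime * (0.7 + 0.2 * random.random())
print(deadline)  # lands between Nov 2025 and Jan 2026, matching the logged deadlines
```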
Jan 21 13:09:45 crc kubenswrapper[4959]: I0121 13:09:45.262625 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:18:21.407106511 +0000 UTC
Jan 21 13:09:45 crc kubenswrapper[4959]: I0121 13:09:45.285570 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:09:45 crc kubenswrapper[4959]: I0121 13:09:45.285651 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:09:45 crc kubenswrapper[4959]: E0121 13:09:45.285740 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:09:45 crc kubenswrapper[4959]: E0121 13:09:45.285814 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:09:46 crc kubenswrapper[4959]: I0121 13:09:46.263544 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:54:25.402798138 +0000 UTC
Jan 21 13:09:46 crc kubenswrapper[4959]: I0121 13:09:46.285881 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:09:46 crc kubenswrapper[4959]: E0121 13:09:46.286122 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:09:46 crc kubenswrapper[4959]: I0121 13:09:46.286441 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:09:46 crc kubenswrapper[4959]: E0121 13:09:46.286679 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
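The same four pods keep cycling through "No sandbox for pod can be found" and "Error syncing pod, skipping": they are exactly the ones that need a pod network, and they stay in this state until the CNI configuration appears. A small triage sketch for a saved excerpt like this one, counting which pods are stuck behind the network error; the input file name is an assumption:

```python
# Count which pods are blocked behind the CNI error in a saved journal excerpt.
# "kubelet.log" is a hypothetical dump, e.g. of `journalctl -u kubelet`.
import re
from collections import Counter

pods = Counter()
pat = re.compile(r'Error syncing pod.*?pod="([^"]+)"')
with open("kubelet.log") as fh:
    for line in fh:
        m = pat.search(line)
        if m:
            pods[m.group(1)] += 1

print(pods.most_common())  # e.g. [('openshift-multus/network-metrics-daemon-6mzgn', N), ...]
```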
Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.263763 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:07:15.363549161 +0000 UTC
Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.285411 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:09:47 crc kubenswrapper[4959]: E0121 13:09:47.285541 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.285419 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:09:47 crc kubenswrapper[4959]: E0121 13:09:47.285743 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.859869 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:09:47 crc kubenswrapper[4959]: E0121 13:09:47.860195 4959 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 13:09:47 crc kubenswrapper[4959]: E0121 13:09:47.860335 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs podName:2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585 nodeName:}" failed. No retries permitted until 2026-01-21 13:10:03.860303779 +0000 UTC m=+64.823334332 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs") pod "network-metrics-daemon-6mzgn" (UID: "2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585") : object "openshift-multus"/"metrics-daemon-secret" not registered
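This mount failure is a side effect of the same startup ordering: the metrics-daemon-secret object has not yet been registered with the kubelet's object cache, and the volume manager backs off exponentially on the failed operation, here "durationBeforeRetry 16s" with no retries until 13:10:03. A generic sketch of that doubling schedule; the base and cap values are assumptions, not the kubelet's exact constants:

```python
# Generic exponential backoff of the shape suggested by "durationBeforeRetry
# 16s": each consecutive MountVolume.SetUp failure doubles the wait, up to a
# cap. Base and cap are illustrative assumptions.
def next_retry_delay(failures: int, base: float = 0.5, cap: float = 128.0) -> float:
    """Seconds to wait after the Nth consecutive failure (N >= 1)."""
    return min(base * 2 ** (failures - 1), cap)

assert next_retry_delay(6) == 16.0  # 0.5 -> 1 -> 2 -> 4 -> 8 -> 16
```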
Has your network provider started?"} Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.978520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.978588 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.978601 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.978620 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:47 crc kubenswrapper[4959]: I0121 13:09:47.978634 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:47Z","lastTransitionTime":"2026-01-21T13:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.082167 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.082230 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.082239 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.082255 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.082265 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.186051 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.186181 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.186209 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.186244 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.186269 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
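[annotation, not part of the journal] Every NotReady dump above carries the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal Go sketch of the kind of directory probe involved, with an assumed extension set (*.conf, *.conflist, *.json); this is illustrative only, not the actual kubelet or cri-o config loader.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named verbatim in the logged error message.
	dir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Printf("no CNI configuration file in %s — network plugin not ready\n", dir)
		return
	}
	fmt.Println("CNI config candidates:", found)
}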
Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.264452 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:28:07.957330905 +0000 UTC Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.286013 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.286022 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:48 crc kubenswrapper[4959]: E0121 13:09:48.286418 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:09:48 crc kubenswrapper[4959]: E0121 13:09:48.286591 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.286813 4959 scope.go:117] "RemoveContainer" containerID="9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.289202 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.289231 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.289239 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.289255 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.289267 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.300328 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.327739 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.345692 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486
615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.374115 4959 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.388220 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.391837 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.391984 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.392114 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.392219 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.392289 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.405649 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.428190 4959 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.443140 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.456347 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.468891 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.493211 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.495611 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.495688 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.495709 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.495739 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.495757 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.508064 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.523740 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.536702 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\
\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.539110 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.539155 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.539165 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.539183 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.539195 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.552417 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: E0121 13:09:48.552424 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.556965 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.557021 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.557033 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.557056 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.557071 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.568262 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: E0121 13:09:48.571232 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"e
b8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.575315 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.575354 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.575368 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.575387 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.575399 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.583572 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: E0121 13:09:48.587542 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.591691 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.591901 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.591975 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.592058 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.592174 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: E0121 13:09:48.604311 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.608230 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.608268 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.608282 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.608301 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.608315 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: E0121 13:09:48.622607 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: E0121 13:09:48.622824 4959 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.624822 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.624871 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.624884 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.624905 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.624923 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.719363 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/1.log" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.724416 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.725071 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.729555 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.729920 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.729952 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.729994 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.730016 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.743555 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.763517 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.780569 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.794482 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.806321 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.816363 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.837396 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.837474 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.837496 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.837543 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.837568 4959 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.834208 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.851751 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.864503 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.876672 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.889161 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.906774 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.923489 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 
13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.940823 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.940863 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.940875 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.940894 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.940906 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:48Z","lastTransitionTime":"2026-01-21T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.944992 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.960063 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:48 crc kubenswrapper[4959]: I0121 13:09:48.981378 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.004891 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.043378 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.043428 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.043436 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.043457 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.043466 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.145882 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.145936 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.145949 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.145968 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.145980 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.174671 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.174838 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.174909 4959 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.174955 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:10:21.174925775 +0000 UTC m=+82.137956318 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.174982 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:10:21.174971237 +0000 UTC m=+82.138001780 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.248551 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.248593 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.248602 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.248621 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.248631 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.264920 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:53:57.106290375 +0000 UTC Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.275483 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.275523 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.275551 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275720 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275763 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 
13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275755 4959 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275871 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:10:21.275846099 +0000 UTC m=+82.238876642 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275779 4959 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275958 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:10:21.275940061 +0000 UTC m=+82.238970604 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275720 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275980 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.275987 4959 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.276009 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:10:21.276003593 +0000 UTC m=+82.239034136 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.285146 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.285147 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.285335 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:49 crc kubenswrapper[4959]: E0121 13:09:49.285462 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.301008 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.313022 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.325466 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.342284 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.351521 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.351574 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.351586 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.351601 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.351612 4959 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.363202 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.381893 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.396040 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.410838 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.425786 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.441583 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.454999 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 
13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.455572 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.455863 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.455877 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.455900 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.455911 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.474998 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.489752 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.502407 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.514065 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.525774 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.539778 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.558324 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.558384 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.558397 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.558420 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.558433 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.662262 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.662318 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.662328 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.662348 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.662359 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.764907 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.764938 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.764948 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.764961 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.764971 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.867709 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.867766 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.867778 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.867797 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.867810 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.970629 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.970790 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.970824 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.970855 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:49 crc kubenswrapper[4959]: I0121 13:09:49.970878 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:49Z","lastTransitionTime":"2026-01-21T13:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.073598 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.073635 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.073648 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.073664 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.073676 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.176688 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.176730 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.176739 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.176755 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.176766 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.265796 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:11:39.471184151 +0000 UTC Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.279529 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.279554 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.279564 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.279576 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.279584 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.286006 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.286056 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:50 crc kubenswrapper[4959]: E0121 13:09:50.286115 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:09:50 crc kubenswrapper[4959]: E0121 13:09:50.286240 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.382166 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.382207 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.382218 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.382232 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.382243 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.485223 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.485288 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.485298 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.485335 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.485351 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.589217 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.589316 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.589337 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.589368 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.589386 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.692431 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.692479 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.692489 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.692507 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.692518 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.732958 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/2.log" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.733509 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/1.log" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.735767 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890" exitCode=1 Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.735806 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.735848 4959 scope.go:117] "RemoveContainer" containerID="9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.736844 4959 scope.go:117] "RemoveContainer" containerID="278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890" Jan 21 13:09:50 crc kubenswrapper[4959]: E0121 13:09:50.737020 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.759223 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.775381 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.788186 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.805976 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.806017 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.806028 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.806043 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.806055 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.808570 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.827646 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.846023 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.857928 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.864865 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.870109 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.883989 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.899878 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.914287 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.914356 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.914378 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.914401 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.914414 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:50Z","lastTransitionTime":"2026-01-21T13:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.929794 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, 
S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.946597 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.960988 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.977702 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:50 crc kubenswrapper[4959]: I0121 13:09:50.991640 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.003780 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.017304 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.017979 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.018025 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.018038 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.018059 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.018074 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.032122 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.046266 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.063058 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.074761 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.086846 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.104644 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 
6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{
\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.120871 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.121404 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.121466 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.121476 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.121491 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.121520 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.133964 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.144492 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.157393 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.170383 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.186310 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.197769 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 
13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.211801 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.223995 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.224059 4959 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.224072 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.224110 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.224125 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.224722 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.238686 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.252902 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.266458 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:20:06.828207006 +0000 UTC Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.273728 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.285405 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.285461 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:51 crc kubenswrapper[4959]: E0121 13:09:51.285561 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:51 crc kubenswrapper[4959]: E0121 13:09:51.285792 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.290220 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.327836 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.327911 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.327953 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.327977 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.327993 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.431783 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.431842 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.431861 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.431887 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.431904 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.535242 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.535354 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.535374 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.535403 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.535424 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.639364 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.639453 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.639471 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.639493 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.639513 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.748272 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.748320 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.748333 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.748354 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.748366 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.750058 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/2.log" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.851873 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.851927 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.851937 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.851953 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.851965 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.955438 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.955481 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.955492 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.955510 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:51 crc kubenswrapper[4959]: I0121 13:09:51.955521 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:51Z","lastTransitionTime":"2026-01-21T13:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.059024 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.059175 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.059195 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.059256 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.059274 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.162243 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.162285 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.162296 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.162312 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.162326 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.265915 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.265958 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.265972 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.265991 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.266004 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.266722 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:57:21.147003718 +0000 UTC Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.285925 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.286057 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:09:52 crc kubenswrapper[4959]: E0121 13:09:52.286154 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:09:52 crc kubenswrapper[4959]: E0121 13:09:52.286282 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.369090 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.369183 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.369196 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.369216 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.369232 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.472523 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.472786 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.472812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.472833 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.472844 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.576020 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.576063 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.576077 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.576115 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.576131 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.679191 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.679251 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.679264 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.679284 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.679296 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.781463 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.781512 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.781525 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.781544 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.781555 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.884614 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.884709 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.884731 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.884760 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.884779 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.987905 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.988015 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.988045 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.988080 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:52 crc kubenswrapper[4959]: I0121 13:09:52.988147 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:52Z","lastTransitionTime":"2026-01-21T13:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.092450 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.092544 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.092580 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.092615 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.092637 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:53Z","lastTransitionTime":"2026-01-21T13:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.196589 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.196663 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.196680 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.196706 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.196727 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:53Z","lastTransitionTime":"2026-01-21T13:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.267673 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:47:36.847378053 +0000 UTC Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.286212 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:53 crc kubenswrapper[4959]: E0121 13:09:53.286464 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.286535 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:53 crc kubenswrapper[4959]: E0121 13:09:53.286724 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.300858 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.300917 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.300935 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.300961 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.300982 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:53Z","lastTransitionTime":"2026-01-21T13:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.404982 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.405388 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.405684 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.405831 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.405974 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:53Z","lastTransitionTime":"2026-01-21T13:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.509884 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.509953 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.509967 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.509987 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:53 crc kubenswrapper[4959]: I0121 13:09:53.509999 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:53Z","lastTransitionTime":"2026-01-21T13:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:54 crc kubenswrapper[4959]: I0121 13:09:54.034392 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:54 crc kubenswrapper[4959]: I0121 13:09:54.034483 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:54 crc kubenswrapper[4959]: I0121 13:09:54.034513 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:54 crc kubenswrapper[4959]: I0121 13:09:54.034545 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:54 crc kubenswrapper[4959]: I0121 13:09:54.034571 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:54Z","lastTransitionTime":"2026-01-21T13:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:54 crc kubenswrapper[4959]: I0121 13:09:54.268506 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:12:17.239305958 +0000 UTC
Jan 21 13:09:54 crc kubenswrapper[4959]: I0121 13:09:54.286071 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:09:54 crc kubenswrapper[4959]: I0121 13:09:54.286164 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:09:54 crc kubenswrapper[4959]: E0121 13:09:54.286286 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:09:54 crc kubenswrapper[4959]: E0121 13:09:54.286404 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:09:55 crc kubenswrapper[4959]: I0121 13:09:55.070350 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:55 crc kubenswrapper[4959]: I0121 13:09:55.070414 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:55 crc kubenswrapper[4959]: I0121 13:09:55.070425 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:55 crc kubenswrapper[4959]: I0121 13:09:55.070447 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:55 crc kubenswrapper[4959]: I0121 13:09:55.070462 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:55Z","lastTransitionTime":"2026-01-21T13:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:55 crc kubenswrapper[4959]: I0121 13:09:55.268790 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:00:57.592467147 +0000 UTC
Jan 21 13:09:55 crc kubenswrapper[4959]: I0121 13:09:55.285808 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:09:55 crc kubenswrapper[4959]: E0121 13:09:55.286028 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:09:55 crc kubenswrapper[4959]: I0121 13:09:55.285812 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:09:55 crc kubenswrapper[4959]: E0121 13:09:55.286262 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
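Interleaved with the readiness loop, certificate_manager.go recomputes a rotation deadline on each pass, and the printed deadline jumps around (2025-12-02, 2025-11-11, later 2026-01-08) because it is re-randomized every time. A sketch of how such a jittered deadline can be derived from the certificate lifetime; the 70-90% window mirrors the behavior of client-go's certificate manager, which is an assumption here, and the notBefore date is hypothetical since the log only prints the expiry:

```python
import random
from datetime import datetime, timedelta

def next_rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a rotation point at a random 70-90% of the certificate's lifetime.

    Assumption: this mirrors client-go's jittered deadline; the log shows only
    the resulting timestamps, not the formula.
    """
    lifetime = (not_after - not_before).total_seconds()
    jitter = 0.7 + 0.2 * random.random()  # uniform in [0.7, 0.9)
    return not_before + timedelta(seconds=lifetime * jitter)

deadline = next_rotation_deadline(
    datetime(2025, 8, 28, 5, 53, 3),   # hypothetical notBefore, not in the log
    datetime(2026, 2, 24, 5, 53, 3),   # "Certificate expiration is 2026-02-24 05:53:03"
)
print("rotation deadline:", deadline)
```

A deadline already in the past (as with 2025-11-11 against the Jan 21 log time) simply means the manager considers the certificate due for rotation and will attempt it on its next cycle.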
Jan 21 13:09:56 crc kubenswrapper[4959]: I0121 13:09:56.013523 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:56 crc kubenswrapper[4959]: I0121 13:09:56.014179 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:56 crc kubenswrapper[4959]: I0121 13:09:56.014210 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:56 crc kubenswrapper[4959]: I0121 13:09:56.014245 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:56 crc kubenswrapper[4959]: I0121 13:09:56.014270 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:56Z","lastTransitionTime":"2026-01-21T13:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:56 crc kubenswrapper[4959]: I0121 13:09:56.269241 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:53:41.827282408 +0000 UTC
Jan 21 13:09:56 crc kubenswrapper[4959]: I0121 13:09:56.285707 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:09:56 crc kubenswrapper[4959]: I0121 13:09:56.285756 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:09:56 crc kubenswrapper[4959]: E0121 13:09:56.285860 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:09:56 crc kubenswrapper[4959]: E0121 13:09:56.285975 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:09:57 crc kubenswrapper[4959]: I0121 13:09:57.052289 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:57 crc kubenswrapper[4959]: I0121 13:09:57.052352 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:57 crc kubenswrapper[4959]: I0121 13:09:57.052388 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:57 crc kubenswrapper[4959]: I0121 13:09:57.052411 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:57 crc kubenswrapper[4959]: I0121 13:09:57.052425 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:57Z","lastTransitionTime":"2026-01-21T13:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:57 crc kubenswrapper[4959]: I0121 13:09:57.269424 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:13:52.097573885 +0000 UTC
Jan 21 13:09:57 crc kubenswrapper[4959]: I0121 13:09:57.286086 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:09:57 crc kubenswrapper[4959]: I0121 13:09:57.286192 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:09:57 crc kubenswrapper[4959]: E0121 13:09:57.286248 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:09:57 crc kubenswrapper[4959]: E0121 13:09:57.286348 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.091528 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.091581 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.091593 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.091610 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.091620 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.270583 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:49:57.188267483 +0000 UTC
Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.286000 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.286036 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:09:58 crc kubenswrapper[4959]: E0121 13:09:58.286197 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:09:58 crc kubenswrapper[4959]: E0121 13:09:58.286295 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.503223 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.503299 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.503318 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.503343 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.503360 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.606615 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.606675 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.606693 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.606721 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.606740 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.711649 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.711695 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.711704 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.711720 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.711734 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.814412 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.814462 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.814472 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.814489 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.814501 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.905068 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.905131 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.905140 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.905158 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.905170 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: E0121 13:09:58.922798 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.928647 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.928719 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.928733 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.928757 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.928771 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: E0121 13:09:58.948528 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.953729 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.953797 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
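Editor's note: the setters.go:603 records repeat because the kubelet re-evaluates the node's Ready condition on every sync loop while the network plugin stays unready. The condition payload quoted in those records is ordinary JSON; a minimal stdlib-Python sketch (payload copied verbatim from the log lines above) that pulls out the fields that matter:

import json

# Condition payload copied verbatim from the setters.go:603 records above.
condition = json.loads(
    '{"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2026-01-21T13:09:58Z",'
    '"lastTransitionTime":"2026-01-21T13:09:58Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
    'Has your network provider started?"}'
)

# The Ready condition stays False for as long as the CNI config is missing.
print(condition["type"], condition["status"])   # Ready False
print(condition["reason"])                      # KubeletNotReady
print(condition["message"].split(":")[0])       # container runtime network not ready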
event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.953817 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.953843 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.953864 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: E0121 13:09:58.970539 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.974997 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.975035 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
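Editor's note: the recurring message "no CNI configuration file in /etc/kubernetes/cni/net.d/" means the container runtime found nothing to load from that directory, so NetworkReady stays False. A minimal sketch of the check the message implies, assuming the conventional CNI config extensions (.conf, .conflist, .json) and that it runs on the node itself:

import os

# Directory named in the kubelet error message above.
CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

def cni_configs(path=CNI_CONF_DIR):
    """Return CNI config files the runtime would consider (.conf/.conflist/.json)."""
    try:
        names = os.listdir(path)
    except FileNotFoundError:
        return []
    return sorted(n for n in names if n.endswith((".conf", ".conflist", ".json")))

found = cni_configs()
if not found:
    # Matches the NetworkPluginNotReady condition in the log: nothing for the
    # network plugin to load, so the node cannot become Ready.
    print(f"no CNI configuration file in {CNI_CONF_DIR}")
else:
    print("found:", ", ".join(found))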
event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.975044 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.975061 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.975075 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:58 crc kubenswrapper[4959]: E0121 13:09:58.988315 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.993528 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.993590 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
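Editor's note: each failed status patch quotes both clocks, so the expiry window can be computed directly from the log. A stdlib-Python check using the two timestamps quoted in the x509 error above:

from datetime import datetime, timezone

FMT = "%Y-%m-%dT%H:%M:%SZ"

# Timestamps quoted verbatim in the webhook error: "current time
# 2026-01-21T13:09:58Z is after 2025-08-24T17:21:41Z".
now = datetime.strptime("2026-01-21T13:09:58Z", FMT).replace(tzinfo=timezone.utc)
not_after = datetime.strptime("2025-08-24T17:21:41Z", FMT).replace(tzinfo=timezone.utc)

expired_for = now - not_after
print(expired_for)        # 149 days, 19:48:17
print(expired_for.days)   # 149 -- the webhook cert expired almost five months earlier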
event="NodeHasNoDiskPressure" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.993609 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.993639 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:58 crc kubenswrapper[4959]: I0121 13:09:58.993660 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:58Z","lastTransitionTime":"2026-01-21T13:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: E0121 13:09:59.016817 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: E0121 13:09:59.017032 4959 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.018981 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.019028 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.019043 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.019068 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.019086 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.122564 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.122620 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.122630 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.122651 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.122663 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.225968 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.226034 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.226053 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.226082 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.226132 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.271821 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:55:11.073211919 +0000 UTC Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.285561 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.285609 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:09:59 crc kubenswrapper[4959]: E0121 13:09:59.285736 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:09:59 crc kubenswrapper[4959]: E0121 13:09:59.285966 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.305233 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.327467 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.329249 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.329287 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.329299 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.329319 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.329333 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.379698 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9108cec30de5b9cb4d6fbf22f0cff33e35e3a5ab452356863e60809986b6ab4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"message\\\":\\\"tor/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 13:09:32.879499 6424 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-26tbg\\\\nI0121 13:09:32.879505 6424 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0121 13:09:32.879434 6424 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, 
S\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.401408 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.422290 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.432733 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.432787 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.432804 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.432827 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.432851 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.441827 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.454246 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.469164 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.484401 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.498490 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.514444 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.528850 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.535409 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.535460 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.535471 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.535490 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.535501 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.544709 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.557926 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.578828 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.595903 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486
615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.607533 4959 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.619234 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:09:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.638111 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.638194 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.638216 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.638246 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.638271 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.741917 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.741978 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.741991 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.742011 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.742026 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.845550 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.845600 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.845614 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.845633 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.845647 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.949652 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.949710 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.949727 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.949756 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:09:59 crc kubenswrapper[4959]: I0121 13:09:59.949776 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:09:59Z","lastTransitionTime":"2026-01-21T13:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.053331 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.053387 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.053404 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.053429 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.053449 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.157520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.157586 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.157609 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.157638 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.157662 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.261416 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.261520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.261547 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.261571 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.261586 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.272720 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:23:00.018654058 +0000 UTC Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.286134 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.286224 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:00 crc kubenswrapper[4959]: E0121 13:10:00.286318 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:00 crc kubenswrapper[4959]: E0121 13:10:00.286396 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.365720 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.365781 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.365799 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.365905 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.365976 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.468949 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.468998 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.469016 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.469041 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.469058 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.571864 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.571919 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.572281 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.572315 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.572330 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.675386 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.675445 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.675463 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.675489 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.675508 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.778027 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.778078 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.778089 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.778132 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.778147 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.880969 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.881032 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.881048 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.881079 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.881125 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.988246 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.988388 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.988416 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.988474 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:00 crc kubenswrapper[4959]: I0121 13:10:00.988498 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:00Z","lastTransitionTime":"2026-01-21T13:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.092624 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.092683 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.092695 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.092713 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.092727 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.195402 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.195452 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.195464 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.195484 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.195497 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.273864 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:39:18.43932034 +0000 UTC Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.285245 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.285338 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:01 crc kubenswrapper[4959]: E0121 13:10:01.285458 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:01 crc kubenswrapper[4959]: E0121 13:10:01.285550 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.299475 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.299541 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.299554 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.299577 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.299624 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.404028 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.404146 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.404227 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.404259 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.404273 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.509243 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.509300 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.509309 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.509328 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.509339 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.612998 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.613059 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.613075 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.613129 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.613151 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.717158 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.717247 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.717276 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.717312 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.717338 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.820702 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.820766 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.820779 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.820800 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.820812 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.924264 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.924324 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.924334 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.924370 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:01 crc kubenswrapper[4959]: I0121 13:10:01.924381 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:01Z","lastTransitionTime":"2026-01-21T13:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.027285 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.027345 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.027360 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.027386 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.027401 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.130323 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.130432 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.130468 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.130523 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.130569 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.234323 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.234382 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.234403 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.234430 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.234447 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.274237 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:12:01.426930106 +0000 UTC Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.285516 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:02 crc kubenswrapper[4959]: E0121 13:10:02.285681 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.286546 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.286728 4959 scope.go:117] "RemoveContainer" containerID="278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890" Jan 21 13:10:02 crc kubenswrapper[4959]: E0121 13:10:02.286750 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:02 crc kubenswrapper[4959]: E0121 13:10:02.287048 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.303302 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.318229 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.333513 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd94
6922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.337355 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.337411 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.337423 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.337446 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.337459 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.346256 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.356915 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.367757 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.385762 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.398652 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.412270 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.423319 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.435365 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.439492 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.439516 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.439524 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.439539 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.439548 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.450800 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.466582 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.493848 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037
fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.508829 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.522583 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.538927 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 
2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.541839 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.541878 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.541893 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.541912 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.541923 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.550535 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:02Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.645304 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.645393 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.645420 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.645503 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.645531 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.748362 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.748427 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.748445 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.748469 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.748487 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.851786 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.851845 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.851857 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.851876 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.851889 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.955387 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.955453 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.955475 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.955507 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:02 crc kubenswrapper[4959]: I0121 13:10:02.955529 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:02Z","lastTransitionTime":"2026-01-21T13:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.058617 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.058668 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.058680 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.058701 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.058714 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.162032 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.162118 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.162137 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.162189 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.162208 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.265694 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.265775 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.265811 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.265832 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.265845 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.275107 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:23:19.980529741 +0000 UTC Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.285725 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.285827 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:03 crc kubenswrapper[4959]: E0121 13:10:03.286013 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:03 crc kubenswrapper[4959]: E0121 13:10:03.286183 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.370281 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.370340 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.370354 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.370373 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.370388 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.474128 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.474197 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.474218 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.474247 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.474266 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.577414 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.577509 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.577526 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.577557 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.577576 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.681043 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.681147 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.681168 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.681197 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.681218 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.784325 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.784375 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.784387 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.784407 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.784421 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.887944 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.887999 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.888012 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.888035 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.888058 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.961358 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:03 crc kubenswrapper[4959]: E0121 13:10:03.961570 4959 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:10:03 crc kubenswrapper[4959]: E0121 13:10:03.961658 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs podName:2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585 nodeName:}" failed. No retries permitted until 2026-01-21 13:10:35.961636065 +0000 UTC m=+96.924666598 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs") pod "network-metrics-daemon-6mzgn" (UID: "2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.990426 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.990462 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.990475 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.990490 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:03 crc kubenswrapper[4959]: I0121 13:10:03.990501 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:03Z","lastTransitionTime":"2026-01-21T13:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.093671 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.093721 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.093735 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.093756 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.093770 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.196483 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.196563 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.196576 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.196602 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.196616 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.275682 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:52:47.828834868 +0000 UTC Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.286022 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.286022 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:04 crc kubenswrapper[4959]: E0121 13:10:04.286192 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:04 crc kubenswrapper[4959]: E0121 13:10:04.286220 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.300878 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.301123 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.301135 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.301156 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.301169 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.403585 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.403638 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.403648 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.403667 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.403679 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.506268 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.506315 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.506330 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.506347 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.506358 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.609312 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.609370 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.609387 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.609407 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.609418 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.712704 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.712807 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.712819 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.712835 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.712847 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.806528 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/0.log" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.806592 4959 generic.go:334] "Generic (PLEG): container finished" podID="867d68b2-3803-46b0-b974-62ec7ee89b49" containerID="7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe" exitCode=1 Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.806638 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5zw9" event={"ID":"867d68b2-3803-46b0-b974-62ec7ee89b49","Type":"ContainerDied","Data":"7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.807165 4959 scope.go:117] "RemoveContainer" containerID="7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.815371 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.815413 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.815450 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.815469 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.815484 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.823480 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.842912 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.860038 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486
615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.874445 4959 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.888181 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.903017 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.918723 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.918789 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.918808 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.918830 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.918873 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:04Z","lastTransitionTime":"2026-01-21T13:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.920565 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01
-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 
13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.943089 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf
5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.957830 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.973620 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:04 crc kubenswrapper[4959]: I0121 13:10:04.987499 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:04.999931 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.015386 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.021147 4959 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.021276 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.021374 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.021463 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.021531 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.030192 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"2026-01-21T13:09:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d\\\\n2026-01-21T13:09:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d to /host/opt/cni/bin/\\\\n2026-01-21T13:09:19Z [verbose] 
multus-daemon started\\\\n2026-01-21T13:09:19Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:10:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.044416 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 
13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.058582 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.070429 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.085951 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.124384 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.124428 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.124439 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.124461 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.124473 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.226973 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.227692 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.227822 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.227932 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.228040 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.276292 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:49:55.1040247 +0000 UTC Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.286016 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.286121 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:05 crc kubenswrapper[4959]: E0121 13:10:05.286717 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:05 crc kubenswrapper[4959]: E0121 13:10:05.286719 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.330838 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.330908 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.330920 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.330936 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.330947 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.434132 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.434168 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.434177 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.434193 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.434204 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.536645 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.536748 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.536762 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.536783 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.536797 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.639869 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.639919 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.639930 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.639947 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.639958 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.743111 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.743157 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.743167 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.743184 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.743195 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.812186 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/0.log" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.812252 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5zw9" event={"ID":"867d68b2-3803-46b0-b974-62ec7ee89b49","Type":"ContainerStarted","Data":"4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.835017 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.847463 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.847516 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.847526 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.847542 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.847554 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.849713 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.860893 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.872781 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 
2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.881950 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.894813 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.910913 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.924963 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.938317 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.952301 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.952355 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.952367 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.952385 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.952398 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:05Z","lastTransitionTime":"2026-01-21T13:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.952876 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.964499 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.983376 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:05 crc kubenswrapper[4959]: I0121 13:10:05.998998 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.013310 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:06 crc kubenswrapper[4959]: 
I0121 13:10:06.029017 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.042978 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.055849 4959 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.055926 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.055941 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.055967 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.055984 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.060125 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"2026-01-21T13:09:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d\\\\n2026-01-21T13:09:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d to /host/opt/cni/bin/\\\\n2026-01-21T13:09:19Z [verbose] multus-daemon started\\\\n2026-01-21T13:09:19Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:10:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.078521 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:06Z is after 2025-08-24T17:21:41Z" Jan 21 
13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.159564 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.159640 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.159655 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.159673 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.159684 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.263193 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.263300 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.263330 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.263367 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.263392 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.277652 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:23:17.846908649 +0000 UTC Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.285397 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.285511 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:06 crc kubenswrapper[4959]: E0121 13:10:06.285610 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:06 crc kubenswrapper[4959]: E0121 13:10:06.285709 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.366143 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.366184 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.366195 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.366210 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.366220 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.469570 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.469643 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.469663 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.469692 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.469711 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.572295 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.572349 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.572364 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.572386 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.572401 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.675869 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.675922 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.675934 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.675953 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.675967 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.779809 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.779856 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.779867 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.779886 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.779900 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.882513 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.882558 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.882572 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.882586 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.882595 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.985935 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.986000 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.986020 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.986047 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:06 crc kubenswrapper[4959]: I0121 13:10:06.986064 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:06Z","lastTransitionTime":"2026-01-21T13:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.088044 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.088401 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.088482 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.088549 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.088616 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.190831 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.190880 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.190893 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.190934 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.190950 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.278011 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:15:53.680978082 +0000 UTC Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.285468 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.285685 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:07 crc kubenswrapper[4959]: E0121 13:10:07.285770 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:07 crc kubenswrapper[4959]: E0121 13:10:07.285939 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.292851 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.292885 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.292896 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.292916 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.292930 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.396580 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.396647 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.396659 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.396679 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.396691 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.499082 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.499189 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.499214 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.499248 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.499271 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.602170 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.602211 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.602224 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.602244 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.602260 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.704992 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.705078 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.705105 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.705129 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.705148 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.807638 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.808013 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.808247 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.808363 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.808449 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.911500 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.911579 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.911603 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.911631 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:07 crc kubenswrapper[4959]: I0121 13:10:07.911648 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:07Z","lastTransitionTime":"2026-01-21T13:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.014380 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.014703 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.014839 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.014934 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.015015 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.117818 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.117880 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.117893 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.117914 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.117930 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.221690 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.221734 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.221744 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.221765 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.221779 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.278835 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:40:50.619481215 +0000 UTC Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.285201 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.285213 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:08 crc kubenswrapper[4959]: E0121 13:10:08.285524 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:08 crc kubenswrapper[4959]: E0121 13:10:08.285654 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.324766 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.324825 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.324843 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.324871 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.324890 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.428009 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.428066 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.428075 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.428108 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.428119 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.531343 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.531399 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.531415 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.531445 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.531460 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.634037 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.634188 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.634207 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.634228 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.634242 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.737422 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.737758 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.737854 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.737948 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.738042 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.841019 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.841081 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.841153 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.841181 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.841200 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.944413 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.944479 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.944494 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.944516 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:08 crc kubenswrapper[4959]: I0121 13:10:08.944532 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:08Z","lastTransitionTime":"2026-01-21T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.048494 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.048574 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.048588 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.048606 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.048618 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.151319 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.151371 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.151380 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.151396 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.151408 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.192676 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.192962 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.193076 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.193255 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.193381 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: E0121 13:10:09.205837 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.210233 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.210376 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.210474 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.210552 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.210622 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: E0121 13:10:09.223842 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.228698 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.228755 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.228765 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.228787 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.228799 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: E0121 13:10:09.242008 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.246525 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.246580 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
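The condition block being patched on every attempt is the same: Ready flips to False with reason KubeletNotReady, while MemoryPressure, DiskPressure and PIDPressure stay healthy. A minimal sketch of reading those conditions back from the API, assuming the official kubernetes Python client is installed and a kubeconfig can still reach the apiserver (both are assumptions, since the cluster is degraded here):

    # Sketch: list the node conditions the kubelet is trying to patch.
    # Assumes: pip-installed 'kubernetes' client and a working kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    node = client.CoreV1Api().read_node("crc")
    for cond in node.status.conditions:
        print(f"{cond.type:16} status={cond.status:6} reason={cond.reason}")
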
event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.246599 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.246624 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.246644 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: E0121 13:10:09.261338 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.265912 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.265956 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
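Every NotReady transition above gives the same root message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch of verifying that on the node itself (the path is quoted verbatim from the log; whether the directory is merely empty or missing entirely is an assumption to be checked):

    # Sketch: confirm the state of the CNI config directory the kubelet complains about.
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"  # path quoted from the kubelet message
    if not os.path.isdir(CNI_CONF_DIR):
        print(f"{CNI_CONF_DIR} does not exist")
    else:
        entries = os.listdir(CNI_CONF_DIR)
        print(entries if entries else "directory exists but holds no CNI config files")
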
event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.265977 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.265996 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.266006 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: E0121 13:10:09.278464 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: E0121 13:10:09.278576 4959 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.278958 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:59:04.737437625 +0000 UTC Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.280855 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.280887 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.280901 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.280919 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.280931 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.285177 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.285234 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:09 crc kubenswrapper[4959]: E0121 13:10:09.285285 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:09 crc kubenswrapper[4959]: E0121 13:10:09.285535 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
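All five patch attempts die at the same point: the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, months before the current time of 2026-01-21T13:10:09Z. A minimal sketch of pulling that certificate and printing its validity window, assuming the webhook is still listening locally and the third-party cryptography package is available:

    # Sketch: fetch the webhook's certificate without verifying it
    # (verification is exactly what fails here) and print its validity dates.
    import ssl
    from cryptography import x509  # third-party package, assumed installed

    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # per the log: 2025-08-24 17:21:41, already past
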
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.302682 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.319522 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.333296 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.346023 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.356790 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.369331 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.383186 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.383420 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.383481 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.383598 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.383667 4959 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.390670 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace 
event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.404515 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.421264 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.433445 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.443562 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.455602 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"2026-01-21T13:09:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d\\\\n2026-01-21T13:09:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d to /host/opt/cni/bin/\\\\n2026-01-21T13:09:19Z [verbose] multus-daemon started\\\\n2026-01-21T13:09:19Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:10:04Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.468281 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 
13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.487030 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.487124 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.487140 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.487187 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.487203 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.489865 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.506056 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.517336 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.533648 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 
2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.545696 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.590504 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.590541 4959 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.590550 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.590566 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.590576 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.693062 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.693132 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.693146 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.693168 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.693183 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.796231 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.796271 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.796279 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.796296 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.796307 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.898879 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.898927 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.898943 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.898965 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:09 crc kubenswrapper[4959]: I0121 13:10:09.898978 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:09Z","lastTransitionTime":"2026-01-21T13:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.002063 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.002131 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.002152 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.002204 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.002220 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.105631 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.105971 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.106049 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.106186 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.106327 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.209019 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.209071 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.209082 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.209102 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.209131 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.280162 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:44:31.140678608 +0000 UTC Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.285413 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:10 crc kubenswrapper[4959]: E0121 13:10:10.285595 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.285708 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:10 crc kubenswrapper[4959]: E0121 13:10:10.286036 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.300175 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.311792 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.311850 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.311861 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.311879 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.311892 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.414771 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.414849 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.414864 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.414884 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.414900 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.516924 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.516984 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.517006 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.517029 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.517044 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.620361 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.620404 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.620415 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.620432 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.620443 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.723487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.723542 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.723552 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.723572 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.723583 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.826357 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.826441 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.826466 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.826497 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.826524 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.929520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.929575 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.929587 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.929605 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:10 crc kubenswrapper[4959]: I0121 13:10:10.929617 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:10Z","lastTransitionTime":"2026-01-21T13:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.032073 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.032161 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.032179 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.032207 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.032234 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.135305 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.135358 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.135376 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.135403 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.135422 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.238859 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.238931 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.238951 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.238980 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.239000 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.281748 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:00:51.723927953 +0000 UTC Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.286258 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.286347 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:11 crc kubenswrapper[4959]: E0121 13:10:11.286433 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:11 crc kubenswrapper[4959]: E0121 13:10:11.286619 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.341912 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.341973 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.341997 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.342029 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.342053 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.447460 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.447505 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.447516 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.447536 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.447550 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.550349 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.550398 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.550414 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.550434 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.550448 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.653443 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.653536 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.653550 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.653573 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.653587 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.755867 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.755925 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.755935 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.755953 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.755963 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.858395 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.858447 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.858457 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.858476 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.858489 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.961349 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.961393 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.961403 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.961418 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:11 crc kubenswrapper[4959]: I0121 13:10:11.961428 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:11Z","lastTransitionTime":"2026-01-21T13:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.064591 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.064665 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.064688 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.064719 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.064742 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.167578 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.167679 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.167705 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.167742 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.167765 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.270659 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.270719 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.270734 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.270755 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.270768 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.282129 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:53:18.530969869 +0000 UTC Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.285536 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.285589 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:12 crc kubenswrapper[4959]: E0121 13:10:12.285711 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:12 crc kubenswrapper[4959]: E0121 13:10:12.285833 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.373130 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.373177 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.373188 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.373203 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.373212 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.476278 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.476320 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.476330 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.476348 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.476361 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.578683 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.578732 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.578743 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.578757 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.578767 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.682315 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.682376 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.682386 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.682406 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.682419 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.786055 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.786137 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.786149 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.786172 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.786186 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.889792 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.889832 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.889841 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.889858 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.889868 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.993027 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.993165 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.993192 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.993223 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:12 crc kubenswrapper[4959]: I0121 13:10:12.993249 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:12Z","lastTransitionTime":"2026-01-21T13:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.096639 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.096723 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.096806 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.096833 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.096852 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.200960 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.201038 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.201050 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.201107 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.201122 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.283051 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:40:24.517928599 +0000 UTC Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.285426 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:13 crc kubenswrapper[4959]: E0121 13:10:13.285666 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.285816 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:13 crc kubenswrapper[4959]: E0121 13:10:13.286456 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.286906 4959 scope.go:117] "RemoveContainer" containerID="278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.318225 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.318280 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.318299 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.318329 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.318351 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.421969 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.422028 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.422046 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.422070 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.422088 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.524981 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.525040 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.525049 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.525067 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.525077 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.628492 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.628565 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.628578 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.628598 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.628640 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.734805 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.734862 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.734952 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.734984 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.734997 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.838580 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.838684 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.838711 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.838748 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.838798 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.941011 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.941576 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.941596 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.941627 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:13 crc kubenswrapper[4959]: I0121 13:10:13.941648 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:13Z","lastTransitionTime":"2026-01-21T13:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.044652 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.044719 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.044739 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.044765 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.044786 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.147757 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.147811 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.147824 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.147839 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.147850 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.250907 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.250944 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.250954 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.250969 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.250980 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.283718 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:52:29.684269383 +0000 UTC Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.285109 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.285110 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:14 crc kubenswrapper[4959]: E0121 13:10:14.285267 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:14 crc kubenswrapper[4959]: E0121 13:10:14.285349 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.354117 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.354161 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.354169 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.354191 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.354202 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.456736 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.456788 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.456800 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.456825 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.456839 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.560009 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.560056 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.560068 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.560084 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.560099 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.663611 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.663689 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.663701 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.663731 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.663743 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.766959 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.767015 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.767029 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.767051 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.767065 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.843651 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/2.log" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.847867 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.848522 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.864169 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.870515 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.870573 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.870591 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.870619 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.870637 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.879547 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a78ea28-1ec8-4ba0-a349-c5773247de59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5d1bfcfe736816e9159afa54c9c980d283c7103298d6950005c9e50e8840f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.898212 4959 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.915788 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.938667 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.953943 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.971380 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"2026-01-21T13:09:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d\\\\n2026-01-21T13:09:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d to /host/opt/cni/bin/\\\\n2026-01-21T13:09:19Z [verbose] multus-daemon started\\\\n2026-01-21T13:09:19Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:10:04Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.973135 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.973161 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.973172 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.973190 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.973201 4959 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:14Z","lastTransitionTime":"2026-01-21T13:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:14 crc kubenswrapper[4959]: I0121 13:10:14.996849 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8dde6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.013937 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.024460 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.035722 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 
2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.046932 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.062288 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.076150 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.076183 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.076194 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.076215 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.076230 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.077318 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.092767 4959 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88
a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.104099 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.116730 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.129376 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.161023 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.179443 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.179520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.179538 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.179564 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.179583 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.283300 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.283352 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.283363 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.283387 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.283403 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.284139 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:39:02.884718146 +0000 UTC Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.285672 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.285814 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:15 crc kubenswrapper[4959]: E0121 13:10:15.285949 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:15 crc kubenswrapper[4959]: E0121 13:10:15.286240 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.386968 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.387021 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.387030 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.387050 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.387061 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.490480 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.490540 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.490552 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.490576 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.490590 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.594024 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.594134 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.594153 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.594180 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.594198 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.697343 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.697401 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.697412 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.697430 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.697442 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.800741 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.800811 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.800827 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.800847 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.800859 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.854212 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/3.log" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.855423 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/2.log" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.859472 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" exitCode=1 Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.859773 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.859833 4959 scope.go:117] "RemoveContainer" containerID="278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.861030 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:10:15 crc kubenswrapper[4959]: E0121 13:10:15.861440 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.884205 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.900872 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486
615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.902967 4959 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.903015 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.903026 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.903058 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.903072 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:15Z","lastTransitionTime":"2026-01-21T13:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.916640 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.929541 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.943354 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.961798 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.977464 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:15 crc kubenswrapper[4959]: I0121 13:10:15.995167 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd94
6922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:15Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.005501 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.005535 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.005549 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.005565 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.005579 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.012067 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.025066 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.037662 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.054427 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://278aa1acbe2f89d0b6bfa0783adf09510e9902ba5dd549eb5df4c7ce3095c890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:09:49Z\\\",\\\"message\\\":\\\"0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471635 6637 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471792 6637 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.471944 6637 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472120 6637 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:09:49.472620 6637 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 13:09:49.472646 6637 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 13:09:49.472654 6637 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 13:09:49.472678 6637 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 13:09:49.472710 6637 factory.go:656] Stopping watch factory\\\\nI0121 13:09:49.472728 6637 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:09:49.472727 6637 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 13:09:49.472735 6637 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 13:09:49.472751 6637 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:14Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 13:10:14.654376 7035 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 13:10:14.654375 7035 factory.go:656] Stopping watch factory\\\\nI0121 13:10:14.654394 7035 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 13:10:14.654529 7035 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI0121 13:10:14.654561 7035 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 2.284921ms\\\\nI0121 13:10:14.654752 7035 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 13:10:14.654842 7035 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 13:10:14.654880 7035 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:10:14.654892 7035 
loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}\\\\nI0121 13:10:14.654928 7035 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator for network=default : 1.760453ms\\\\nI0121 13:10:14.654908 7035 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 13:10:14.655011 7035 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:10:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountP
ath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.065105 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 
13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.076910 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a78ea28-1ec8-4ba0-a349-c5773247de59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5d1bfcfe736816e9159afa54c9c980d283c7103298d6950005c9e50e8840f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.090864 4959 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.101672 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.109489 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.109592 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.109609 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.109629 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.109641 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.118173 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.136997 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.152177 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"2026-01-21T13:09:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d\\\\n2026-01-21T13:09:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d to /host/opt/cni/bin/\\\\n2026-01-21T13:09:19Z [verbose] multus-daemon started\\\\n2026-01-21T13:09:19Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:10:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.212927 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.212992 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.213018 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.213052 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.213078 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.284267 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:18:59.683636955 +0000 UTC Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.285557 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.285637 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:16 crc kubenswrapper[4959]: E0121 13:10:16.285734 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:16 crc kubenswrapper[4959]: E0121 13:10:16.285818 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.316378 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.316429 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.316440 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.316459 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.316469 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
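
Note the certificate_manager entry above: the kubelet serving certificate does not expire until 2026-02-24, yet its rotation deadline of 2026-01-11 is already in the past, so rotation is due immediately. A sketch of how such a deadline can land before "now", assuming the client-go certificate manager's scheme of jittering to a random point roughly 70-90% of the way through the validity window; treat the exact fractions, and the issue date used below, as assumptions:

// Editor's sketch: derive a jittered rotation deadline from a cert's
// validity window. The 0.7-0.9 jitter range and the NotBefore date are
// assumptions; only the expiration is taken from the log.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiration from the log
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("deadline already passed: rotate now")
	}
}
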
Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.422457 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.422527 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.422543 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.422567 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.422583 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.525343 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.525418 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.525436 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.525467 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.525487 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.629186 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.629272 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.629291 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.629320 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.629339 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
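
The NetworkReady=false condition being re-recorded throughout this window reduces to an empty directory scan: the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/ because ovnkube-controller keeps crashing before it can write one. A simplified stand-in for the libcni-style scan (not the actual kubelet code):

// Editor's sketch: list candidate CNI config files in a conf directory,
// in the manner of libcni's ConfFiles. An empty result is what yields
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, filepath.Join(dir, e.Name()))
		}
	}
	return found, nil
}

func main() {
	files, err := cniConfFiles("/etc/kubernetes/cni/net.d/")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(files) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs:", files)
}
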
Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.732244 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.732320 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.732338 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.732369 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.732389 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.835793 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.835866 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.835884 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.835910 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.835927 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
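
The ovnkube-controller entries just below report CrashLoopBackOff with "back-off 40s" at restartCount 3. That figure is consistent with the commonly documented kubelet schedule, a 10s base delay doubling per restart up to a 5m cap; a sketch under that assumption:

// Editor's sketch: the assumed CrashLoopBackOff schedule (10s base,
// doubling per restart, capped at 5m). restartCount=3 yields 40s,
// matching the log entries below.
package main

import (
	"fmt"
	"time"
)

func crashLoopBackOff(restartCount int) time.Duration {
	const base = 10 * time.Second
	const maxDelay = 5 * time.Minute
	if restartCount < 1 {
		return 0 // first start: no back-off yet
	}
	d := base
	for i := 1; i < restartCount; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 1; r <= 5; r++ {
		fmt.Printf("restartCount=%d -> back-off %s\n", r, crashLoopBackOff(r))
	}
}
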
Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.866257 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/3.log" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.876376 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:10:16 crc kubenswrapper[4959]: E0121 13:10:16.876590 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.907000 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271d76cb08749b33e0de45fc6f8b03c5ad5f63f6
7bf6a7c1895a5e89a485b7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:14Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 13:10:14.654376 7035 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 13:10:14.654375 7035 factory.go:656] Stopping watch factory\\\\nI0121 13:10:14.654394 7035 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 13:10:14.654529 7035 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI0121 13:10:14.654561 7035 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 2.284921ms\\\\nI0121 13:10:14.654752 7035 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 13:10:14.654842 7035 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 13:10:14.654880 7035 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:10:14.654892 7035 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}\\\\nI0121 13:10:14.654928 7035 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator for network=default : 1.760453ms\\\\nI0121 13:10:14.654908 7035 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 13:10:14.655011 7035 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.923656 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.939528 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.939757 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.939768 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.939786 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.939800 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:16Z","lastTransitionTime":"2026-01-21T13:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.941440 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.953043 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.963597 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.978353 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:16 crc kubenswrapper[4959]: I0121 13:10:16.994257 4959 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"2026-01-21T13:09:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d\\\\n2026-01-21T13:09:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d to /host/opt/cni/bin/\\\\n2026-01-21T13:09:19Z [verbose] multus-daemon started\\\\n2026-01-21T13:09:19Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:10:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:16Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.007075 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 
13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.026846 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a78ea28-1ec8-4ba0-a349-c5773247de59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5d1bfcfe736816e9159afa54c9c980d283c7103298d6950005c9e50e8840f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.043527 4959 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.043583 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.043609 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.043644 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.043670 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.050615 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.067998 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.083321 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.101034 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.126640 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.145779 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486
615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.146418 4959 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.146469 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.146484 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.146509 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.146523 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.160388 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.176733 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.192574 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.210438 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:17Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.248502 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.248586 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.248627 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.248648 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.248664 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.284573 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:29:25.667301368 +0000 UTC Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.285867 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.285898 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:17 crc kubenswrapper[4959]: E0121 13:10:17.286074 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:17 crc kubenswrapper[4959]: E0121 13:10:17.286248 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.350812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.350868 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.350883 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.350902 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.350914 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.453466 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.453511 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.453521 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.453539 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.453550 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.557244 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.557309 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.557332 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.557364 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.557385 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.661062 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.661179 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.661207 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.661238 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.661260 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.765664 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.765720 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.765737 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.765763 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.765783 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.869559 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.869809 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.869834 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.869866 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.869893 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.973689 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.973730 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.973742 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.973762 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:17 crc kubenswrapper[4959]: I0121 13:10:17.973774 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:17Z","lastTransitionTime":"2026-01-21T13:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.076964 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.077069 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.077143 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.077180 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.077217 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.180262 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.180316 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.180336 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.180360 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.180372 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.283229 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.283364 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.283386 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.283411 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.283432 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.285882 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:14:01.866636734 +0000 UTC Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.285954 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.285982 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:18 crc kubenswrapper[4959]: E0121 13:10:18.286205 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:18 crc kubenswrapper[4959]: E0121 13:10:18.286425 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.387368 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.387455 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.387491 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.387522 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.387545 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.492937 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.493004 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.493023 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.493050 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.493069 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.596389 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.596456 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.596467 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.596506 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.596522 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.699973 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.700073 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.700091 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.700208 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.700236 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.803571 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.804956 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.805020 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.805059 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.805088 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.908332 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.908402 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.908423 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.908451 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:18 crc kubenswrapper[4959]: I0121 13:10:18.908471 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:18Z","lastTransitionTime":"2026-01-21T13:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.012441 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.012507 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.012525 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.012551 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.012570 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.116496 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.116580 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.116595 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.116616 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.116628 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.219520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.219600 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.219613 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.219633 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.219649 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.285810 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:19 crc kubenswrapper[4959]: E0121 13:10:19.285975 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.285977 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:19:05.923890592 +0000 UTC Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.287122 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:19 crc kubenswrapper[4959]: E0121 13:10:19.287255 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.301686 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d296177f-f010-4aac-9f26-89062b061f6e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c157d8a5eb5829c0a5e20b2dae9b375bb28c1aa32408351b23918885cd27fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0f8dd3ab3a7bc019dcdf179145ca958b98bdcb10bca5f78c0ed714cde8869df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af2fde2b50ab34f7304bc7254f98034bba39340b47da77c20842186ecf490a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6db000753e5ac39fb9044fc4879f8964f7931c697378dd9dce6c612c608453d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.318131 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://536977b065b068c255789eb01542f06293304da3734748be6835d34ce97bd3e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.322238 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.322278 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.322287 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.322306 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.322323 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.330260 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d4gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6mzgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.359943 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29e39e42-cf95-4280-b56d-4255ca2737a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478aeaad30517bbd321c39306e311a70d80d97d01332c7f7e8d8ef1dbeb0474b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbde757cd3404423edff647a437561b00969f6fc1071e6ab99d1e408ed774f47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://649d604fd4cdfaddd806660909b273ddf5924ebd86e8f7ff010eab37b9c003b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f69eea22af627cef69712513bdf90dbc63be8d
de6ed1b71ab1f98a7ac3488fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf94b4a50d79901271593c03b316a2ebcf44e03222735addb0cb5cc8d02a59f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b433c8be93737bb3f6d14b9e7fe11d822125770f5f935d912ecde25b4dd852f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73b55e9ee56167a901347ae3e67e5e161fb48ab04aa660c037fbce94dd616b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://129ae772f61a7349a964bc022bee7066578461322d8c13edb295e5756cdf7ccf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.376452 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08af37e1-90cb-4397-ab98-608ede176954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bf6810c2a6873e38a5894532a5cb486
615aa1e499082a4b903ccace42908552\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:09:11.857964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:09:11.859851 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-18571547/tls.crt::/tmp/serving-cert-18571547/tls.key\\\\\\\"\\\\nI0121 13:09:17.259861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:09:17.268021 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:09:17.268063 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:09:17.270006 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:09:17.270058 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:09:17.277392 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:09:17.277424 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277432 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:09:17.277437 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:09:17.277443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:09:17.277446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:09:17.277450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:09:17.277693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:09:17.280914 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:01Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.396609 4959 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.417970 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"342f1ad8-984e-41bd-acca-edad9366e45d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://267d23db7d10257e5af25befd696c9b8dc64cc2553717af98b79e258737f7ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25c63a18615b52aabae528b8ec8a5301c13c81389ec2d81ee36a255c654ee94a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70414f8417e2dae5cbd02132b433350a586626beec2d101685ec536ee190e259\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537660de00c3cdfc3ad2b4a582d347b8c97fa8a67d0a542af452c9a1bb203dea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869831a237ec979d0463da8bfb2cb6930fe45fdfea8c794b288058c494352cbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15d4834b5704c919092b9c629d8894e783be28b78a9ecc0471902692186714f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed5381be43ef71fc0393141a9b35bb29075b32868b75a0f13afb329e913d7e69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mstm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tqwdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.425487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.425539 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc 
kubenswrapper[4959]: I0121 13:10:19.425550 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.425571 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.425588 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.433735 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gx5vl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4768408-f881-4a09-9857-2e7580a4b1c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4422be667beacbf4b089899336ee43b8c682ef9687274fb51c0582f5ce6624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99r4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gx5vl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.449413 4959 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26tbg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e27abf2-1c58-4c8e-9f92-d3323ee8d397\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c24aad2f419b678a8b1cdd011f5c76827e5c314b5c466051513f2bff01ba9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x72h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26tbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.469055 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:14Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 13:10:14.654376 7035 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 13:10:14.654375 7035 factory.go:656] Stopping watch factory\\\\nI0121 13:10:14.654394 7035 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 13:10:14.654529 7035 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-canary/ingress-canary\\\\\\\"}\\\\nI0121 13:10:14.654561 7035 services_controller.go:360] Finished syncing service ingress-canary on namespace openshift-ingress-canary for network=default : 2.284921ms\\\\nI0121 13:10:14.654752 7035 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 13:10:14.654842 7035 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 13:10:14.654880 7035 ovnkube.go:599] Stopped ovnkube\\\\nI0121 13:10:14.654892 7035 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager-operator/metrics\\\\\\\"}\\\\nI0121 13:10:14.654928 7035 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator for network=default : 1.760453ms\\\\nI0121 13:10:14.654908 7035 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 13:10:14.655011 7035 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:10:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvkhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x7k8s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.487444 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d42113bf-edad-4ff4-87bb-69eff7dde5cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://328fdf87122642eb0fb183a41a960bdf34f44ffe92427a9e4b96031fd7e45db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ddbb2d08db4b70889ad21fbddd946922f6cf3e13f993f782f23cc74806bd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fd43a57c51715c7ebd2f4af534d9f80f2bc4dc40e2d708c88a041e60db51784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.492696 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.492735 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.492753 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.492771 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.492782 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.502819 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: E0121 13:10:19.510246 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.515487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.515527 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.515538 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.515555 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.515568 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.520307 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0eeb3e5d8ed24c3706800ec8e0272236a261a3bcc02ef66fed8751f522ef34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.532856 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: E0121 13:10:19.532918 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"e
b8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.538829 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.538897 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.538932 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.538965 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.538983 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.547354 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d99d89-7cdc-410d-b2f3-347be806f79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dbe26b8a35cc0f7c78eb409a75947aea215a710320ffc37d21e7897254e68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmklh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwkrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: E0121 13:10:19.554016 4959 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256
:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"si
zeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":46317936
5},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.560820 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.560865 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.560879 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.560901 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.560916 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.563712 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5zw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"867d68b2-3803-46b0-b974-62ec7ee89b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:10:04Z\\\",\\\"message\\\":\\\"2026-01-21T13:09:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d\\\\n2026-01-21T13:09:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9ae02481-474f-4bdc-bc47-126d73eda02d to /host/opt/cni/bin/\\\\n2026-01-21T13:09:19Z [verbose] multus-daemon started\\\\n2026-01-21T13:09:19Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:10:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpg6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5zw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.577019 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab27e9ee-7556-4ae0-ab20-e7a689b15e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c76293b98e6a971d889a166f43e0f6a1f54e663f5528bd6e918a1603d12815a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9471069b0ffddb13ae9c6c470a02fb8d26af47bedf11ebe7f6a585a963d02d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkrbs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:09:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-w9q9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 
13:10:19 crc kubenswrapper[4959]: E0121 13:10:19.579449 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.586838 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.586881 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.586891 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.586921 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.586934 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.590522 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a78ea28-1ec8-4ba0-a349-c5773247de59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5d1bfcfe736816e9159afa54c9c980d283c7103298d6950005c9e50e8840f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea42301b233a70486c7bc604e3619461a5b89cde321b268572dbd9481a2a9a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21
T13:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:08:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: E0121 13:10:19.601357 4959 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:10:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"643a7796-2a45-42fa-a4a4-6600967da7c3\\\",\\\"systemUUID\\\":\\\"eb8e8451-d560-452c-bda4-2002f2e3fe0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: E0121 13:10:19.601477 4959 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.603237 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.603310 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.603327 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.603862 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.603917 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.604631 4959 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:09:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8485e67337bd62d438f91e7c8fea0d634ab039fd5e5ea35ff9c71035da031fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c8bfbc86fdb88dcf74e257bbdfeb68a5225d0dd6942c47f030bac4050f9d8eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"nam
e\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:10:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.706270 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.706365 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.706379 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.706396 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.706409 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.810004 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.810058 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.810070 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.810088 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.810104 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.912774 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.912853 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.912874 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.912902 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:19 crc kubenswrapper[4959]: I0121 13:10:19.912923 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:19Z","lastTransitionTime":"2026-01-21T13:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.017067 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.017204 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.017228 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.017256 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.017280 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.120639 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.120690 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.120704 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.120724 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.120736 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.223441 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.223506 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.223524 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.223554 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.223573 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.285676 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.285676 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:20 crc kubenswrapper[4959]: E0121 13:10:20.285868 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.286209 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:36:15.175389952 +0000 UTC Jan 21 13:10:20 crc kubenswrapper[4959]: E0121 13:10:20.286226 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.326673 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.326755 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.326781 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.326817 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.326841 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.430637 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.430677 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.430686 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.430703 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.430713 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.533839 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.533889 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.533907 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.533936 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.533962 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.638024 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.638085 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.638126 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.638149 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.638163 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.740921 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.741000 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.741021 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.741051 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.741077 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.844591 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.844652 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.844669 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.844694 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.844715 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.955661 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.955746 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.955771 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.955808 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:20 crc kubenswrapper[4959]: I0121 13:10:20.955836 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:20Z","lastTransitionTime":"2026-01-21T13:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.059476 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.059537 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.059552 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.059581 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.059601 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.162050 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.162148 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.162163 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.162182 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.162197 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.263947 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.264124 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:25.264064851 +0000 UTC m=+146.227095394 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.264244 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.264363 4959 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.264410 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:11:25.264402283 +0000 UTC m=+146.227432826 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.265122 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.265163 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.265173 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.265189 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.265199 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.286060 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.286186 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.286395 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:32:16.501868336 +0000 UTC Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.286485 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.286350 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.365788 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.365845 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.365915 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366034 4959 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366069 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366119 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366139 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:11:25.366081855 +0000 UTC m=+146.329112408 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366141 4959 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366197 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:11:25.366181059 +0000 UTC m=+146.329211612 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366214 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366279 4959 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366298 4959 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:10:21 crc kubenswrapper[4959]: E0121 13:10:21.366391 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:11:25.366362895 +0000 UTC m=+146.329393628 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.368428 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.368469 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.368483 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.368503 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.368517 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.471819 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.471879 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.471891 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.471914 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.471930 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.574749 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.574801 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.574812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.574827 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.574838 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.677590 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.677662 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.677685 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.677712 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.677730 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.780728 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.780796 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.780812 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.780836 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.780853 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.883491 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.883532 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.883540 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.883557 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.883574 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.986631 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.986701 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.986724 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.986752 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:21 crc kubenswrapper[4959]: I0121 13:10:21.986958 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:21Z","lastTransitionTime":"2026-01-21T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.090185 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.090299 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.090317 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.090345 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.090363 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.192922 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.192993 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.193004 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.193030 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.193043 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.285672 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.285747 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:22 crc kubenswrapper[4959]: E0121 13:10:22.285813 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:22 crc kubenswrapper[4959]: E0121 13:10:22.285906 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.286637 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:38:28.503605504 +0000 UTC Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.295742 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.295788 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.295801 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.295822 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.295836 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.399564 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.399608 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.399617 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.399633 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.399644 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.503065 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.503126 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.503137 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.503154 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.503168 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.606487 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.606550 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.606562 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.606584 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.606600 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.710234 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.710309 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.710321 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.710341 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.710378 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.814463 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.814525 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.814541 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.814561 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.814576 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.917973 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.918052 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.918072 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.918135 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:22 crc kubenswrapper[4959]: I0121 13:10:22.918156 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:22Z","lastTransitionTime":"2026-01-21T13:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.021964 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.022045 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.022065 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.022148 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.022183 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.125785 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.126292 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.126306 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.126323 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.126335 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.231408 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.231471 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.231481 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.231499 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.231510 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.285956 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:23 crc kubenswrapper[4959]: E0121 13:10:23.286404 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.286478 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:23 crc kubenswrapper[4959]: E0121 13:10:23.286676 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.286752 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:28:16.547458755 +0000 UTC Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.335154 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.335200 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.335210 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.335229 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.335243 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.439928 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.440007 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.440021 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.440043 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.440055 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.544436 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.544520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.544536 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.544562 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.544578 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.647055 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.647159 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.647173 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.647195 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.647210 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.750254 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.750312 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.750326 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.750368 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.750393 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.857958 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.858020 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.858029 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.858045 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.858056 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.961938 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.961990 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.962001 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.962022 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:23 crc kubenswrapper[4959]: I0121 13:10:23.962035 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:23Z","lastTransitionTime":"2026-01-21T13:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.064580 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.064626 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.064638 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.064655 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.064666 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.167920 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.167982 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.167994 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.168017 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.168030 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.272008 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.272069 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.272086 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.272157 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.272175 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.285518 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.285552 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:24 crc kubenswrapper[4959]: E0121 13:10:24.285684 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:24 crc kubenswrapper[4959]: E0121 13:10:24.285850 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.287700 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:15:50.997408719 +0000 UTC Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.375964 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.376034 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.376051 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.376080 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.376127 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.479584 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.479658 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.479675 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.479702 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.479721 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.583334 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.583423 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.583440 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.583467 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.583486 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.687537 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.687611 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.687629 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.687656 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.687675 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.791529 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.791583 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.791595 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.791616 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.791629 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.894707 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.894769 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.894788 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.894813 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.894833 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.999122 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.999183 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.999199 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.999223 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:24 crc kubenswrapper[4959]: I0121 13:10:24.999238 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:24Z","lastTransitionTime":"2026-01-21T13:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.102925 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.103004 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.103023 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.103049 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.103070 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.206653 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.206736 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.206768 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.206798 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.206820 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.286340 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.286390 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:25 crc kubenswrapper[4959]: E0121 13:10:25.286579 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:25 crc kubenswrapper[4959]: E0121 13:10:25.286823 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.288239 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:29:17.460751141 +0000 UTC Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.309232 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.309315 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.309346 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.309377 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.309400 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.412652 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.412704 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.412715 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.412735 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.412745 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.516648 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.516714 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.516722 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.516742 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.516754 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.619465 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.619552 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.619570 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.619596 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.619614 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.722846 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.722904 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.722920 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.722944 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.722960 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.826765 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.826844 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.826870 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.826903 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.826932 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.930273 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.930346 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.930436 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.930521 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:25 crc kubenswrapper[4959]: I0121 13:10:25.930544 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:25Z","lastTransitionTime":"2026-01-21T13:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.033366 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.033412 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.033423 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.033439 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.033449 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.136576 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.136675 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.136705 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.136737 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.136766 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.239671 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.239725 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.239740 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.239764 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.239778 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.286051 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.286071 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:26 crc kubenswrapper[4959]: E0121 13:10:26.286345 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:26 crc kubenswrapper[4959]: E0121 13:10:26.286489 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.288831 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:56:12.887837561 +0000 UTC Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.343272 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.343320 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.343332 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.343352 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.343366 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.446413 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.446460 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.446469 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.446486 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.446496 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.549164 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.549235 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.549244 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.549281 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.549293 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.653517 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.653575 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.653587 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.653607 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.653621 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.757549 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.757624 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.757649 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.757683 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.757711 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.860481 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.860557 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.860576 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.860611 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.860632 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.964479 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.964555 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.964601 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.964635 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:26 crc kubenswrapper[4959]: I0121 13:10:26.964658 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:26Z","lastTransitionTime":"2026-01-21T13:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.068421 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.068503 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.068520 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.068546 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.068565 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.171619 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.171681 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.171699 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.171730 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.171752 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.274612 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.274914 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.275082 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.275204 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.275309 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.286401 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.286631 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:27 crc kubenswrapper[4959]: E0121 13:10:27.286761 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:27 crc kubenswrapper[4959]: E0121 13:10:27.287039 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.289288 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:25:03.350294958 +0000 UTC Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.378216 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.378392 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.378421 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.378455 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.378477 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.482365 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.482440 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.482452 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.482502 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.482515 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.586022 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.586138 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.586163 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.586192 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.586224 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.689376 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.689431 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.689449 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.689475 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.689493 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.826783 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.826830 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.826841 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.826859 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.826874 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.930252 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.930333 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.930351 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.930379 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:27 crc kubenswrapper[4959]: I0121 13:10:27.930399 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:27Z","lastTransitionTime":"2026-01-21T13:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.033506 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.033577 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.033599 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.033629 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.033651 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.136737 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.136795 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.136807 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.136825 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.136836 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.240033 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.240113 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.240130 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.240152 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.240172 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.286266 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.286355 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:28 crc kubenswrapper[4959]: E0121 13:10:28.286636 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:28 crc kubenswrapper[4959]: E0121 13:10:28.287228 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.290481 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:57:31.626608669 +0000 UTC Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.343855 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.343908 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.343918 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.343934 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.343945 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.448776 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.448911 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.448938 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.448977 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.449017 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.553274 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.553353 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.553374 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.553403 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.553423 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.657175 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.657245 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.657263 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.657294 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.657313 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.763928 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.763990 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.764006 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.764030 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.764047 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.867402 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.867470 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.867486 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.867513 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.867527 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.972281 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.972374 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.972393 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.972419 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:28 crc kubenswrapper[4959]: I0121 13:10:28.972439 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:28Z","lastTransitionTime":"2026-01-21T13:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.075049 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.075089 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.075122 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.075139 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.075149 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:29Z","lastTransitionTime":"2026-01-21T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.178284 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.178360 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.178382 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.178411 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.178435 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:29Z","lastTransitionTime":"2026-01-21T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.281957 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.282043 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.282069 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.282152 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.282192 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:29Z","lastTransitionTime":"2026-01-21T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.285390 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.285390 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:29 crc kubenswrapper[4959]: E0121 13:10:29.285658 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:29 crc kubenswrapper[4959]: E0121 13:10:29.285770 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.290782 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:06:53.312864722 +0000 UTC Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.373929 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-w5zw9" podStartSLOduration=72.373902615 podStartE2EDuration="1m12.373902615s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.373902165 +0000 UTC m=+90.336932708" watchObservedRunningTime="2026-01-21 13:10:29.373902615 +0000 UTC m=+90.336933158" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.374231 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podStartSLOduration=72.374225964 podStartE2EDuration="1m12.374225964s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.354398297 +0000 UTC m=+90.317428900" watchObservedRunningTime="2026-01-21 13:10:29.374225964 +0000 UTC m=+90.337256507" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.386173 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.386533 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.386550 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.386568 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.386598 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:29Z","lastTransitionTime":"2026-01-21T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.402203 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-w9q9g" podStartSLOduration=71.402146408 podStartE2EDuration="1m11.402146408s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.387850512 +0000 UTC m=+90.350881075" watchObservedRunningTime="2026-01-21 13:10:29.402146408 +0000 UTC m=+90.365176951" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.415752 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.415745796 podStartE2EDuration="19.415745796s" podCreationTimestamp="2026-01-21 13:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.402383465 +0000 UTC m=+90.365414018" watchObservedRunningTime="2026-01-21 13:10:29.415745796 +0000 UTC m=+90.378776339" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.429681 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.429653402 podStartE2EDuration="39.429653402s" podCreationTimestamp="2026-01-21 13:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.428672446 +0000 UTC m=+90.391702999" watchObservedRunningTime="2026-01-21 13:10:29.429653402 +0000 UTC m=+90.392683945" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.489022 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.488974756 podStartE2EDuration="1m10.488974756s" podCreationTimestamp="2026-01-21 13:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.488842992 +0000 UTC m=+90.451873545" watchObservedRunningTime="2026-01-21 13:10:29.488974756 +0000 UTC m=+90.452005299" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.490213 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.490252 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.490272 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.490288 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.490300 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:29Z","lastTransitionTime":"2026-01-21T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.515969 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.515943555 podStartE2EDuration="1m11.515943555s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.515941145 +0000 UTC m=+90.478971688" watchObservedRunningTime="2026-01-21 13:10:29.515943555 +0000 UTC m=+90.478974098" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.561577 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tqwdg" podStartSLOduration=72.561553808 podStartE2EDuration="1m12.561553808s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.548624329 +0000 UTC m=+90.511654912" watchObservedRunningTime="2026-01-21 13:10:29.561553808 +0000 UTC m=+90.524584351" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.561807 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gx5vl" podStartSLOduration=72.561803665 podStartE2EDuration="1m12.561803665s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.561004763 +0000 UTC m=+90.524035316" watchObservedRunningTime="2026-01-21 13:10:29.561803665 +0000 UTC m=+90.524834208" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.592661 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.592703 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.592712 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.592728 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.592738 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:29Z","lastTransitionTime":"2026-01-21T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.599015 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-26tbg" podStartSLOduration=72.59898761 podStartE2EDuration="1m12.59898761s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.575927627 +0000 UTC m=+90.538958170" watchObservedRunningTime="2026-01-21 13:10:29.59898761 +0000 UTC m=+90.562018153" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.613121 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.613104392 podStartE2EDuration="1m12.613104392s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:29.611693144 +0000 UTC m=+90.574723687" watchObservedRunningTime="2026-01-21 13:10:29.613104392 +0000 UTC m=+90.576134935" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.695425 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.695478 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.695492 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.695510 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.695526 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:29Z","lastTransitionTime":"2026-01-21T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.753415 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.753484 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.753507 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.753554 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.753594 4959 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:10:29Z","lastTransitionTime":"2026-01-21T13:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.830069 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm"] Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.830645 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.833424 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.833892 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.833970 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.834497 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.878052 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c59a53f-9c24-43c7-b43f-ce969be8cd07-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.878159 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c59a53f-9c24-43c7-b43f-ce969be8cd07-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.878202 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c59a53f-9c24-43c7-b43f-ce969be8cd07-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.878237 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c59a53f-9c24-43c7-b43f-ce969be8cd07-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.878509 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c59a53f-9c24-43c7-b43f-ce969be8cd07-service-ca\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.979896 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c59a53f-9c24-43c7-b43f-ce969be8cd07-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.979970 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c59a53f-9c24-43c7-b43f-ce969be8cd07-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.980025 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c59a53f-9c24-43c7-b43f-ce969be8cd07-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.980068 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c59a53f-9c24-43c7-b43f-ce969be8cd07-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.980260 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c59a53f-9c24-43c7-b43f-ce969be8cd07-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.980598 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c59a53f-9c24-43c7-b43f-ce969be8cd07-service-ca\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.980661 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c59a53f-9c24-43c7-b43f-ce969be8cd07-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.982279 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c59a53f-9c24-43c7-b43f-ce969be8cd07-service-ca\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:29 crc kubenswrapper[4959]: I0121 13:10:29.995808 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4c59a53f-9c24-43c7-b43f-ce969be8cd07-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.007819 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c59a53f-9c24-43c7-b43f-ce969be8cd07-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-c26pm\" (UID: \"4c59a53f-9c24-43c7-b43f-ce969be8cd07\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.151051 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.285353 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:30 crc kubenswrapper[4959]: E0121 13:10:30.285489 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.285817 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:30 crc kubenswrapper[4959]: E0121 13:10:30.286306 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.286601 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:10:30 crc kubenswrapper[4959]: E0121 13:10:30.286783 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.291150 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:33:40.096915766 +0000 UTC Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.291241 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.306601 4959 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.936694 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" event={"ID":"4c59a53f-9c24-43c7-b43f-ce969be8cd07","Type":"ContainerStarted","Data":"126bb1eadb8b7502fa0624d410f894caccfb87ce725ecb3c36fae576b80954b1"} Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.937267 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" event={"ID":"4c59a53f-9c24-43c7-b43f-ce969be8cd07","Type":"ContainerStarted","Data":"eef1db7292713046da806c291967b877293f4c2fbb8e2fb3d46972e211d8b6c0"} Jan 21 13:10:30 crc kubenswrapper[4959]: I0121 13:10:30.959337 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-c26pm" podStartSLOduration=73.959314969 podStartE2EDuration="1m13.959314969s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:10:30.959231336 +0000 UTC m=+91.922261979" watchObservedRunningTime="2026-01-21 13:10:30.959314969 +0000 UTC m=+91.922345512" Jan 21 13:10:31 crc kubenswrapper[4959]: I0121 13:10:31.286210 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:31 crc kubenswrapper[4959]: I0121 13:10:31.286260 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:31 crc kubenswrapper[4959]: E0121 13:10:31.286354 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:31 crc kubenswrapper[4959]: E0121 13:10:31.286455 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:32 crc kubenswrapper[4959]: I0121 13:10:32.285299 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:32 crc kubenswrapper[4959]: I0121 13:10:32.285365 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:32 crc kubenswrapper[4959]: E0121 13:10:32.285482 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:32 crc kubenswrapper[4959]: E0121 13:10:32.285659 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:33 crc kubenswrapper[4959]: I0121 13:10:33.285615 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:33 crc kubenswrapper[4959]: I0121 13:10:33.285615 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:33 crc kubenswrapper[4959]: E0121 13:10:33.287514 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:33 crc kubenswrapper[4959]: E0121 13:10:33.287739 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:34 crc kubenswrapper[4959]: I0121 13:10:34.285709 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:34 crc kubenswrapper[4959]: I0121 13:10:34.285717 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:34 crc kubenswrapper[4959]: E0121 13:10:34.287009 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:34 crc kubenswrapper[4959]: E0121 13:10:34.287161 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:35 crc kubenswrapper[4959]: I0121 13:10:35.285789 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:35 crc kubenswrapper[4959]: I0121 13:10:35.285838 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:35 crc kubenswrapper[4959]: E0121 13:10:35.286062 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:35 crc kubenswrapper[4959]: E0121 13:10:35.286237 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:36 crc kubenswrapper[4959]: I0121 13:10:36.055545 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:36 crc kubenswrapper[4959]: E0121 13:10:36.055748 4959 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:10:36 crc kubenswrapper[4959]: E0121 13:10:36.055850 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs podName:2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585 nodeName:}" failed. 
No retries permitted until 2026-01-21 13:11:40.055821477 +0000 UTC m=+161.018852020 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs") pod "network-metrics-daemon-6mzgn" (UID: "2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:10:36 crc kubenswrapper[4959]: I0121 13:10:36.285652 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:36 crc kubenswrapper[4959]: I0121 13:10:36.285700 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:36 crc kubenswrapper[4959]: E0121 13:10:36.285816 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:36 crc kubenswrapper[4959]: E0121 13:10:36.285935 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:37 crc kubenswrapper[4959]: I0121 13:10:37.286200 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:37 crc kubenswrapper[4959]: I0121 13:10:37.286202 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:37 crc kubenswrapper[4959]: E0121 13:10:37.286395 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:37 crc kubenswrapper[4959]: E0121 13:10:37.286502 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:38 crc kubenswrapper[4959]: I0121 13:10:38.285873 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:38 crc kubenswrapper[4959]: I0121 13:10:38.285919 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:38 crc kubenswrapper[4959]: E0121 13:10:38.286082 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:38 crc kubenswrapper[4959]: E0121 13:10:38.286193 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:39 crc kubenswrapper[4959]: I0121 13:10:39.285996 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:39 crc kubenswrapper[4959]: I0121 13:10:39.286037 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:39 crc kubenswrapper[4959]: E0121 13:10:39.287275 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:39 crc kubenswrapper[4959]: E0121 13:10:39.287412 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:40 crc kubenswrapper[4959]: I0121 13:10:40.285203 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:40 crc kubenswrapper[4959]: I0121 13:10:40.285266 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:40 crc kubenswrapper[4959]: E0121 13:10:40.285375 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:40 crc kubenswrapper[4959]: E0121 13:10:40.285522 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:41 crc kubenswrapper[4959]: I0121 13:10:41.285813 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:41 crc kubenswrapper[4959]: E0121 13:10:41.285998 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:41 crc kubenswrapper[4959]: I0121 13:10:41.286168 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:41 crc kubenswrapper[4959]: E0121 13:10:41.286511 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:42 crc kubenswrapper[4959]: I0121 13:10:42.285217 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:42 crc kubenswrapper[4959]: I0121 13:10:42.285241 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:42 crc kubenswrapper[4959]: E0121 13:10:42.285490 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:42 crc kubenswrapper[4959]: E0121 13:10:42.285634 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:43 crc kubenswrapper[4959]: I0121 13:10:43.286431 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:43 crc kubenswrapper[4959]: E0121 13:10:43.286670 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:43 crc kubenswrapper[4959]: I0121 13:10:43.287163 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:43 crc kubenswrapper[4959]: E0121 13:10:43.287300 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:44 crc kubenswrapper[4959]: I0121 13:10:44.285349 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:44 crc kubenswrapper[4959]: I0121 13:10:44.285470 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:44 crc kubenswrapper[4959]: E0121 13:10:44.286174 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:44 crc kubenswrapper[4959]: I0121 13:10:44.286422 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:10:44 crc kubenswrapper[4959]: E0121 13:10:44.286601 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:44 crc kubenswrapper[4959]: E0121 13:10:44.286760 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x7k8s_openshift-ovn-kubernetes(eea635fd-8d4a-4b77-bb58-3d778f59c79e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" Jan 21 13:10:45 crc kubenswrapper[4959]: I0121 13:10:45.285653 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:45 crc kubenswrapper[4959]: I0121 13:10:45.285755 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:45 crc kubenswrapper[4959]: E0121 13:10:45.285845 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:45 crc kubenswrapper[4959]: E0121 13:10:45.285995 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:46 crc kubenswrapper[4959]: I0121 13:10:46.285917 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:46 crc kubenswrapper[4959]: E0121 13:10:46.286036 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:46 crc kubenswrapper[4959]: I0121 13:10:46.285917 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:46 crc kubenswrapper[4959]: E0121 13:10:46.286233 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:47 crc kubenswrapper[4959]: I0121 13:10:47.285901 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:47 crc kubenswrapper[4959]: I0121 13:10:47.285982 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:47 crc kubenswrapper[4959]: E0121 13:10:47.286072 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:47 crc kubenswrapper[4959]: E0121 13:10:47.286197 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:48 crc kubenswrapper[4959]: I0121 13:10:48.285398 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:48 crc kubenswrapper[4959]: E0121 13:10:48.285517 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:48 crc kubenswrapper[4959]: I0121 13:10:48.285606 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:48 crc kubenswrapper[4959]: E0121 13:10:48.285834 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:49 crc kubenswrapper[4959]: I0121 13:10:49.285479 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:49 crc kubenswrapper[4959]: I0121 13:10:49.289905 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:49 crc kubenswrapper[4959]: E0121 13:10:49.290206 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:49 crc kubenswrapper[4959]: E0121 13:10:49.290978 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:50 crc kubenswrapper[4959]: I0121 13:10:50.286041 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:50 crc kubenswrapper[4959]: I0121 13:10:50.286182 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:50 crc kubenswrapper[4959]: E0121 13:10:50.286293 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:50 crc kubenswrapper[4959]: E0121 13:10:50.286436 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:51 crc kubenswrapper[4959]: I0121 13:10:51.015651 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/1.log" Jan 21 13:10:51 crc kubenswrapper[4959]: I0121 13:10:51.016273 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/0.log" Jan 21 13:10:51 crc kubenswrapper[4959]: I0121 13:10:51.016324 4959 generic.go:334] "Generic (PLEG): container finished" podID="867d68b2-3803-46b0-b974-62ec7ee89b49" containerID="4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031" exitCode=1 Jan 21 13:10:51 crc kubenswrapper[4959]: I0121 13:10:51.016364 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5zw9" event={"ID":"867d68b2-3803-46b0-b974-62ec7ee89b49","Type":"ContainerDied","Data":"4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031"} Jan 21 13:10:51 crc kubenswrapper[4959]: I0121 13:10:51.016410 4959 scope.go:117] "RemoveContainer" containerID="7f008d48061ed7abb1346cfc099a13d30bee7a6e5a50fd027b044ee7b65fc0fe" Jan 21 13:10:51 crc kubenswrapper[4959]: I0121 13:10:51.016934 4959 scope.go:117] "RemoveContainer" containerID="4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031" Jan 21 13:10:51 crc kubenswrapper[4959]: E0121 13:10:51.017175 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-w5zw9_openshift-multus(867d68b2-3803-46b0-b974-62ec7ee89b49)\"" pod="openshift-multus/multus-w5zw9" podUID="867d68b2-3803-46b0-b974-62ec7ee89b49" Jan 21 13:10:51 crc kubenswrapper[4959]: I0121 13:10:51.286233 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:51 crc kubenswrapper[4959]: I0121 13:10:51.286356 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:51 crc kubenswrapper[4959]: E0121 13:10:51.286438 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:51 crc kubenswrapper[4959]: E0121 13:10:51.286892 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:52 crc kubenswrapper[4959]: I0121 13:10:52.021044 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/1.log" Jan 21 13:10:52 crc kubenswrapper[4959]: I0121 13:10:52.285421 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:52 crc kubenswrapper[4959]: I0121 13:10:52.285482 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:52 crc kubenswrapper[4959]: E0121 13:10:52.285585 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:52 crc kubenswrapper[4959]: E0121 13:10:52.285652 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:53 crc kubenswrapper[4959]: I0121 13:10:53.285359 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:53 crc kubenswrapper[4959]: I0121 13:10:53.285414 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:53 crc kubenswrapper[4959]: E0121 13:10:53.285559 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:53 crc kubenswrapper[4959]: E0121 13:10:53.285718 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:54 crc kubenswrapper[4959]: I0121 13:10:54.285483 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:54 crc kubenswrapper[4959]: I0121 13:10:54.285592 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:54 crc kubenswrapper[4959]: E0121 13:10:54.285698 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:54 crc kubenswrapper[4959]: E0121 13:10:54.285867 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:55 crc kubenswrapper[4959]: I0121 13:10:55.285214 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:55 crc kubenswrapper[4959]: I0121 13:10:55.285374 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:55 crc kubenswrapper[4959]: E0121 13:10:55.285491 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:55 crc kubenswrapper[4959]: E0121 13:10:55.285595 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:56 crc kubenswrapper[4959]: I0121 13:10:56.286058 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:56 crc kubenswrapper[4959]: I0121 13:10:56.286160 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:56 crc kubenswrapper[4959]: E0121 13:10:56.286259 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:56 crc kubenswrapper[4959]: E0121 13:10:56.286397 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:57 crc kubenswrapper[4959]: I0121 13:10:57.286282 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:57 crc kubenswrapper[4959]: E0121 13:10:57.286498 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:57 crc kubenswrapper[4959]: I0121 13:10:57.286622 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:57 crc kubenswrapper[4959]: E0121 13:10:57.286779 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:58 crc kubenswrapper[4959]: I0121 13:10:58.286160 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:10:58 crc kubenswrapper[4959]: I0121 13:10:58.286234 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:10:58 crc kubenswrapper[4959]: E0121 13:10:58.286374 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585" Jan 21 13:10:58 crc kubenswrapper[4959]: E0121 13:10:58.286501 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:10:59 crc kubenswrapper[4959]: E0121 13:10:59.271087 4959 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 13:10:59 crc kubenswrapper[4959]: I0121 13:10:59.286079 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:10:59 crc kubenswrapper[4959]: I0121 13:10:59.286192 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:10:59 crc kubenswrapper[4959]: E0121 13:10:59.287672 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:10:59 crc kubenswrapper[4959]: E0121 13:10:59.287924 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:10:59 crc kubenswrapper[4959]: I0121 13:10:59.288088 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:10:59 crc kubenswrapper[4959]: E0121 13:10:59.391552 4959 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 13:11:00 crc kubenswrapper[4959]: I0121 13:11:00.052768 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/3.log"
Jan 21 13:11:00 crc kubenswrapper[4959]: I0121 13:11:00.055846 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerStarted","Data":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"}
Jan 21 13:11:00 crc kubenswrapper[4959]: I0121 13:11:00.056335 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s"
Jan 21 13:11:00 crc kubenswrapper[4959]: I0121 13:11:00.220180 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podStartSLOduration=103.220150958 podStartE2EDuration="1m43.220150958s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:00.08485066 +0000 UTC m=+121.047881203" watchObservedRunningTime="2026-01-21 13:11:00.220150958 +0000 UTC m=+121.183181511"
Jan 21 13:11:00 crc kubenswrapper[4959]: I0121 13:11:00.220651 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6mzgn"]
Jan 21 13:11:00 crc kubenswrapper[4959]: I0121 13:11:00.220797 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:11:00 crc kubenswrapper[4959]: E0121 13:11:00.220921 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:11:00 crc kubenswrapper[4959]: I0121 13:11:00.286394 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:11:00 crc kubenswrapper[4959]: E0121 13:11:00.286596 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:11:01 crc kubenswrapper[4959]: I0121 13:11:01.285579 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:11:01 crc kubenswrapper[4959]: I0121 13:11:01.285611 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:11:01 crc kubenswrapper[4959]: E0121 13:11:01.286351 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:11:01 crc kubenswrapper[4959]: I0121 13:11:01.285902 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:11:01 crc kubenswrapper[4959]: E0121 13:11:01.286529 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:11:01 crc kubenswrapper[4959]: E0121 13:11:01.286633 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:11:02 crc kubenswrapper[4959]: I0121 13:11:02.285426 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:11:02 crc kubenswrapper[4959]: E0121 13:11:02.285639 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:11:03 crc kubenswrapper[4959]: I0121 13:11:03.285884 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:11:03 crc kubenswrapper[4959]: I0121 13:11:03.285917 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:11:03 crc kubenswrapper[4959]: I0121 13:11:03.286066 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:11:03 crc kubenswrapper[4959]: E0121 13:11:03.286192 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:11:03 crc kubenswrapper[4959]: E0121 13:11:03.286363 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:11:03 crc kubenswrapper[4959]: E0121 13:11:03.286392 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:11:04 crc kubenswrapper[4959]: I0121 13:11:04.285361 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:11:04 crc kubenswrapper[4959]: E0121 13:11:04.285595 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:11:04 crc kubenswrapper[4959]: E0121 13:11:04.393775 4959 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 13:11:05 crc kubenswrapper[4959]: I0121 13:11:05.285404 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:11:05 crc kubenswrapper[4959]: I0121 13:11:05.285509 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:11:05 crc kubenswrapper[4959]: E0121 13:11:05.285585 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:11:05 crc kubenswrapper[4959]: E0121 13:11:05.285748 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:11:05 crc kubenswrapper[4959]: I0121 13:11:05.285816 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:11:05 crc kubenswrapper[4959]: E0121 13:11:05.286460 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:11:05 crc kubenswrapper[4959]: I0121 13:11:05.286563 4959 scope.go:117] "RemoveContainer" containerID="4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031"
Jan 21 13:11:06 crc kubenswrapper[4959]: I0121 13:11:06.084577 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/1.log"
Jan 21 13:11:06 crc kubenswrapper[4959]: I0121 13:11:06.085196 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5zw9" event={"ID":"867d68b2-3803-46b0-b974-62ec7ee89b49","Type":"ContainerStarted","Data":"a5cbffcf6d5315d0e71ca2cebcbf2ad03cf0607b739d2eb490068d3e9f9e5ed2"}
Jan 21 13:11:06 crc kubenswrapper[4959]: I0121 13:11:06.286038 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:11:06 crc kubenswrapper[4959]: E0121 13:11:06.286260 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:11:07 crc kubenswrapper[4959]: I0121 13:11:07.286205 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:11:07 crc kubenswrapper[4959]: I0121 13:11:07.286198 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:11:07 crc kubenswrapper[4959]: I0121 13:11:07.286409 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:11:07 crc kubenswrapper[4959]: E0121 13:11:07.286623 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:11:07 crc kubenswrapper[4959]: E0121 13:11:07.286816 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:11:07 crc kubenswrapper[4959]: E0121 13:11:07.287350 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:11:08 crc kubenswrapper[4959]: I0121 13:11:08.285593 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:11:08 crc kubenswrapper[4959]: E0121 13:11:08.285751 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:11:09 crc kubenswrapper[4959]: I0121 13:11:09.285685 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:11:09 crc kubenswrapper[4959]: I0121 13:11:09.285700 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn"
Jan 21 13:11:09 crc kubenswrapper[4959]: I0121 13:11:09.288190 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:11:09 crc kubenswrapper[4959]: E0121 13:11:09.288290 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:11:09 crc kubenswrapper[4959]: E0121 13:11:09.288368 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6mzgn" podUID="2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585"
Jan 21 13:11:09 crc kubenswrapper[4959]: E0121 13:11:09.288454 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.285993 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.289154 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.290668 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.567432 4959 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.617399 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.618338 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d8t58"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.618788 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.619447 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.622629 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tlw47"]
Jan 21 13:11:10 crc kubenswrapper[4959]: W0121 13:11:10.623699 4959 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object
Jan 21 13:11:10 crc kubenswrapper[4959]: E0121 13:11:10.623793 4959 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.624567 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 13:11:10 crc kubenswrapper[4959]: W0121 13:11:10.627956 4959 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 21 13:11:10 crc kubenswrapper[4959]: E0121 13:11:10.628058 4959 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.629764 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: W0121 13:11:10.629947 4959 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 21 13:11:10 crc kubenswrapper[4959]: E0121 13:11:10.630060 4959 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 13:11:10 crc kubenswrapper[4959]: W0121 13:11:10.630624 4959 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 21 13:11:10 crc kubenswrapper[4959]: E0121 13:11:10.642244 4959 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.642512 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.642530 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 13:11:10 crc kubenswrapper[4959]: W0121 13:11:10.643037 4959 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.643056 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 13:11:10 crc kubenswrapper[4959]: E0121 13:11:10.643081 4959 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.643177 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: W0121 13:11:10.643409 4959 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.643666 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"]
Jan 21 13:11:10 crc kubenswrapper[4959]: E0121 13:11:10.643553 4959 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.644174 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.648188 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q5vmq"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.648616 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.648772 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.648889 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.649002 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.649510 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.649568 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5qxvm"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.650182 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.649012 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.649440 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.650787 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.651549 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7pb5k"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.651907 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pfbvr"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.655754 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.656287 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.656387 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.656285 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.656585 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7pb5k"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.657083 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rvj6z"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.657560 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rvj6z"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.657959 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.662630 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.663001 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.667350 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.667907 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.668005 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.668171 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.668430 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.668503 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.668642 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.668787 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.668881 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.668952 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669080 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669305 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669386 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669511 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669557 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669661 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669718 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669857 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669971 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670146 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669524 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670338 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.669663 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670458 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670527 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670542 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670627 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670671 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670682 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670297 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670782 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670853 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670877 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670997 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.671064 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670459 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.671182 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.671275 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.671384 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.671410 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.670999 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.671561 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.671644 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.671717 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xlvm4"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.672803 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.672885 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sxbb8"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.673553 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.675845 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.676868 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.679005 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.687598 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.687790 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.687899 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.688016 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.693232 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.694110 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.694957 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.695238 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.695815 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hvsqs"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697648 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-trusted-ca-bundle\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697710 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8fbacbf-6d70-4d37-a123-30151512cf5f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697778 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2f6\" (UniqueName: \"kubernetes.io/projected/0bd68930-fc78-45b1-b297-b60b53ad6823-kube-api-access-4x2f6\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697828 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-oauth-serving-cert\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697861 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-config\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697895 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697927 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xhbr\" (UniqueName: \"kubernetes.io/projected/e8fbacbf-6d70-4d37-a123-30151512cf5f-kube-api-access-8xhbr\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697955 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tz97\" (UniqueName: \"kubernetes.io/projected/fc248d84-5152-4675-9b2b-596ba0b2dc7c-kube-api-access-5tz97\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.697989 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698021 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a29e505-0841-4b13-9f9b-3ad6984bc580-audit-dir\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698057 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knd2c\" (UniqueName: \"kubernetes.io/projected/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-kube-api-access-knd2c\") pod \"cluster-samples-operator-665b6dd947-gr22l\" (UID: \"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698083 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc248d84-5152-4675-9b2b-596ba0b2dc7c-serving-cert\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698193 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-audit\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698497 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-serving-cert\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698601 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/575852dc-3cb2-4a05-8fd0-aad5aef44b92-config\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698725 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-etcd-client\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698750 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-service-ca\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698781 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-config\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698807 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-config\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698829 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-encryption-config\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698850 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698881 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-client-ca\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698915 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-etcd-serving-ca\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.698971 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-service-ca\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699014 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-ca\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699046 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-client\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699078 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-serving-cert\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699123 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhst2\" (UniqueName: \"kubernetes.io/projected/7a29e505-0841-4b13-9f9b-3ad6984bc580-kube-api-access-fhst2\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699151 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd68930-fc78-45b1-b297-b60b53ad6823-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699179 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-serving-cert\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699206 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/575852dc-3cb2-4a05-8fd0-aad5aef44b92-trusted-ca\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699226 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhw5n\" (UniqueName: \"kubernetes.io/projected/575852dc-3cb2-4a05-8fd0-aad5aef44b92-kube-api-access-vhw5n\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699251 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzgsr\" (UniqueName: \"kubernetes.io/projected/a3221620-4989-4bff-8cfc-19da6a21a2da-kube-api-access-xzgsr\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699291 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699318 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-config\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699340 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8fbacbf-6d70-4d37-a123-30151512cf5f-images\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699357 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-config\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699378 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gr22l\" (UID: \"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699407 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe969621-d2d3-4af2-b0ff-28657f978ca4-serving-cert\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699427 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f99vm\" (UniqueName: \"kubernetes.io/projected/fe969621-d2d3-4af2-b0ff-28657f978ca4-kube-api-access-f99vm\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699449 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a29e505-0841-4b13-9f9b-3ad6984bc580-node-pullsecrets\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699470 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/575852dc-3cb2-4a05-8fd0-aad5aef44b92-serving-cert\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699526 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699553 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-image-import-ca\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699574 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7ft\" (UniqueName: \"kubernetes.io/projected/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-kube-api-access-cw7ft\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699724 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd68930-fc78-45b1-b297-b60b53ad6823-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699757 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjhl\" (UniqueName: \"kubernetes.io/projected/a8697566-9d27-4d19-be54-2c5307ab5962-kube-api-access-jfjhl\") pod \"downloads-7954f5f757-7pb5k\" (UID: \"a8697566-9d27-4d19-be54-2c5307ab5962\") " pod="openshift-console/downloads-7954f5f757-7pb5k"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.699886 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8fbacbf-6d70-4d37-a123-30151512cf5f-config\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.700892 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-oauth-config\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.700929 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt6x2\" (UniqueName: \"kubernetes.io/projected/277cb73f-7c9e-46e0-bb04-4baea31ec998-kube-api-access-xt6x2\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.701920 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.702038 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.701918 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.703657 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.703761 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.703677 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.705042 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.718308 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.718652 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.718867 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.718884 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.719428 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.719490 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.719564 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.719653 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.719877 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.720278 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.720525 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.721314 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.721852 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.721861 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.724554 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.724615 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.724811 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.724566 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.727326 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.727938 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.729834 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.730344 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.731030 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.731976 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.732237 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.732515 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.732548 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq"]
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.732762 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.733290 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.733501 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.733736 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.733810 4959 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.734114 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.734322 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.734472 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.734975 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.735178 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.736575 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.737480 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.737969 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.738444 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.743663 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.753113 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.758000 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.762548 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.763594 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.769811 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.770864 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xgxkj"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.771116 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.771853 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.772031 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.776405 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.776430 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.776668 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.778086 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.778236 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.779484 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.783192 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.793952 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.794482 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.796454 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hn2rm"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.797033 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.797070 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.797866 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804291 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a29e505-0841-4b13-9f9b-3ad6984bc580-audit-dir\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804326 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knd2c\" (UniqueName: \"kubernetes.io/projected/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-kube-api-access-knd2c\") pod \"cluster-samples-operator-665b6dd947-gr22l\" (UID: \"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804349 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc248d84-5152-4675-9b2b-596ba0b2dc7c-serving-cert\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804370 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-audit\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804386 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-serving-cert\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804401 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/575852dc-3cb2-4a05-8fd0-aad5aef44b92-config\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804418 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-service-ca\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804431 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-etcd-client\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804458 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/124c92ca-d749-4da9-afe2-f7002b29f983-serving-cert\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804474 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-service-ca-bundle\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804493 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-config\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804510 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-config\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804526 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-encryption-config\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804542 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804557 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-client-ca\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804581 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-etcd-serving-ca\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804598 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-service-ca\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 
13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804616 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-ca\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804633 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-serving-cert\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804648 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-client\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804667 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-serving-cert\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804683 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhst2\" (UniqueName: \"kubernetes.io/projected/7a29e505-0841-4b13-9f9b-3ad6984bc580-kube-api-access-fhst2\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804700 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd68930-fc78-45b1-b297-b60b53ad6823-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804721 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw7fk\" (UniqueName: \"kubernetes.io/projected/124c92ca-d749-4da9-afe2-f7002b29f983-kube-api-access-rw7fk\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804744 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzgsr\" (UniqueName: \"kubernetes.io/projected/a3221620-4989-4bff-8cfc-19da6a21a2da-kube-api-access-xzgsr\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804760 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/575852dc-3cb2-4a05-8fd0-aad5aef44b92-trusted-ca\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804777 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhw5n\" (UniqueName: \"kubernetes.io/projected/575852dc-3cb2-4a05-8fd0-aad5aef44b92-kube-api-access-vhw5n\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804794 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-config\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804811 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804836 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gr22l\" (UID: \"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804855 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8fbacbf-6d70-4d37-a123-30151512cf5f-images\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804873 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-config\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804895 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe969621-d2d3-4af2-b0ff-28657f978ca4-serving-cert\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804910 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f99vm\" (UniqueName: \"kubernetes.io/projected/fe969621-d2d3-4af2-b0ff-28657f978ca4-kube-api-access-f99vm\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804933 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a29e505-0841-4b13-9f9b-3ad6984bc580-node-pullsecrets\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804948 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/575852dc-3cb2-4a05-8fd0-aad5aef44b92-serving-cert\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804965 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804979 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-image-import-ca\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.804996 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw7ft\" (UniqueName: \"kubernetes.io/projected/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-kube-api-access-cw7ft\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805011 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd68930-fc78-45b1-b297-b60b53ad6823-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805028 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-config\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805057 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjhl\" (UniqueName: \"kubernetes.io/projected/a8697566-9d27-4d19-be54-2c5307ab5962-kube-api-access-jfjhl\") pod \"downloads-7954f5f757-7pb5k\" (UID: \"a8697566-9d27-4d19-be54-2c5307ab5962\") " pod="openshift-console/downloads-7954f5f757-7pb5k" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805083 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-oauth-config\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805113 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt6x2\" (UniqueName: \"kubernetes.io/projected/277cb73f-7c9e-46e0-bb04-4baea31ec998-kube-api-access-xt6x2\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805128 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8fbacbf-6d70-4d37-a123-30151512cf5f-config\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805153 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-trusted-ca-bundle\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805170 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8fbacbf-6d70-4d37-a123-30151512cf5f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805186 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x2f6\" (UniqueName: \"kubernetes.io/projected/0bd68930-fc78-45b1-b297-b60b53ad6823-kube-api-access-4x2f6\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805204 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-oauth-serving-cert\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805221 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-config\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805237 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805253 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805273 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xhbr\" (UniqueName: \"kubernetes.io/projected/e8fbacbf-6d70-4d37-a123-30151512cf5f-kube-api-access-8xhbr\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805289 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tz97\" (UniqueName: \"kubernetes.io/projected/fc248d84-5152-4675-9b2b-596ba0b2dc7c-kube-api-access-5tz97\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805304 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.805427 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a29e505-0841-4b13-9f9b-3ad6984bc580-audit-dir\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.806538 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.807002 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.807316 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-audit\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.808146 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-service-ca\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.808157 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-etcd-serving-ca\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.808487 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd68930-fc78-45b1-b297-b60b53ad6823-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.808632 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-ca\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.809845 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/575852dc-3cb2-4a05-8fd0-aad5aef44b92-trusted-ca\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.811507 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/575852dc-3cb2-4a05-8fd0-aad5aef44b92-config\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.812116 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-service-ca\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.815845 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8fbacbf-6d70-4d37-a123-30151512cf5f-images\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.816447 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-config\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.817957 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-config\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.818924 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-client-ca\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.820371 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gr22l\" (UID: \"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.821150 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-image-import-ca\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.821302 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b7ws8"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.821825 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-serving-cert\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.822038 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.822702 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-oauth-serving-cert\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.822816 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.823301 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a29e505-0841-4b13-9f9b-3ad6984bc580-node-pullsecrets\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.823697 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8fbacbf-6d70-4d37-a123-30151512cf5f-config\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: 
\"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.823754 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a29e505-0841-4b13-9f9b-3ad6984bc580-config\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.825676 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-trusted-ca-bundle\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.825824 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-config\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.826208 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe969621-d2d3-4af2-b0ff-28657f978ca4-config\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.826982 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.827549 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.828327 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe969621-d2d3-4af2-b0ff-28657f978ca4-etcd-client\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.830056 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe969621-d2d3-4af2-b0ff-28657f978ca4-serving-cert\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.830825 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-oauth-config\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.834569 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8fbacbf-6d70-4d37-a123-30151512cf5f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.834945 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-encryption-config\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.835246 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc248d84-5152-4675-9b2b-596ba0b2dc7c-serving-cert\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.835661 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-serving-cert\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.835689 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a29e505-0841-4b13-9f9b-3ad6984bc580-etcd-client\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.836939 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/575852dc-3cb2-4a05-8fd0-aad5aef44b92-serving-cert\") pod \"console-operator-58897d9998-rvj6z\" (UID: 
\"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.837437 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr76d"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.838285 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pm5c8"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.838350 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.838713 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.838818 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.839213 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.839235 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-j4l2m"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.839238 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.839380 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.840045 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd68930-fc78-45b1-b297-b60b53ad6823-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.840631 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.840664 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.840680 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d8t58"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.840690 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.840747 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-j4l2m" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.842733 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.844475 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.844873 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.846733 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-serving-cert\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.846797 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.850807 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.851786 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.852542 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pfbvr"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.853993 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q5vmq"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.858379 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.860307 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xlvm4"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.861737 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tlw47"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.863642 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sxbb8"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.866019 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2xdfx"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.866318 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.866756 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2xdfx" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.866960 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-v47ps"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.867823 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v47ps" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.868048 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7pb5k"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.870222 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.871152 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.872847 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pm5c8"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.874980 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.876961 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.878596 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.882082 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.883186 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hvsqs"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.886916 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2xdfx"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.890892 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.892691 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.894124 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xgxkj"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.895689 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rvj6z"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.897122 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.898483 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"] Jan 21 13:11:10 crc kubenswrapper[4959]: 
I0121 13:11:10.900160 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5qxvm"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.901461 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.902617 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.902814 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j4l2m"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.904316 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b7ws8"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.905780 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr76d"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.906054 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124c92ca-d749-4da9-afe2-f7002b29f983-serving-cert\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.906082 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-service-ca-bundle\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.906153 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw7fk\" (UniqueName: \"kubernetes.io/projected/124c92ca-d749-4da9-afe2-f7002b29f983-kube-api-access-rw7fk\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.906221 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-config\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.906297 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.907220 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-config\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: 
\"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.907283 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.907368 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-service-ca-bundle\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.907446 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124c92ca-d749-4da9-afe2-f7002b29f983-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.908675 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.909627 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124c92ca-d749-4da9-afe2-f7002b29f983-serving-cert\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.909841 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h5nps"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.911384 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h5nps"] Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.911485 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.922399 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.942258 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.963490 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 13:11:10 crc kubenswrapper[4959]: I0121 13:11:10.982105 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.002774 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.021647 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.041885 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.062077 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.082858 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.101358 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.122066 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.141634 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.168909 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.181505 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.201301 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.222483 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.242364 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.263544 4959 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.281615 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.286061 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.286061 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.286328 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.302296 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.323043 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.341874 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.362545 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.382741 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.402275 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.422210 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.441703 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.461988 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.481700 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.522742 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.542713 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.562792 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.582271 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 
13:11:11.602246 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.623320 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.642927 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.661883 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.681904 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.702453 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.752158 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.762601 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.780366 4959 request.go:700] Waited for 1.001655971s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.783496 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.802835 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 13:11:11 crc kubenswrapper[4959]: E0121 13:11:11.806272 4959 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:11:11 crc kubenswrapper[4959]: E0121 13:11:11.806488 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles podName:a3221620-4989-4bff-8cfc-19da6a21a2da nodeName:}" failed. No retries permitted until 2026-01-21 13:11:12.306425854 +0000 UTC m=+133.269456437 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles") pod "controller-manager-879f6c89f-d8t58" (UID: "a3221620-4989-4bff-8cfc-19da6a21a2da") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:11:11 crc kubenswrapper[4959]: E0121 13:11:11.820044 4959 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 13:11:11 crc kubenswrapper[4959]: E0121 13:11:11.820286 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert podName:a3221620-4989-4bff-8cfc-19da6a21a2da nodeName:}" failed. No retries permitted until 2026-01-21 13:11:12.320197197 +0000 UTC m=+133.283227770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert") pod "controller-manager-879f6c89f-d8t58" (UID: "a3221620-4989-4bff-8cfc-19da6a21a2da") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.821964 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 13:11:11 crc kubenswrapper[4959]: E0121 13:11:11.823846 4959 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:11:11 crc kubenswrapper[4959]: E0121 13:11:11.824008 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca podName:a3221620-4989-4bff-8cfc-19da6a21a2da nodeName:}" failed. No retries permitted until 2026-01-21 13:11:12.323988919 +0000 UTC m=+133.287019502 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca") pod "controller-manager-879f6c89f-d8t58" (UID: "a3221620-4989-4bff-8cfc-19da6a21a2da") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.842355 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.862022 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.882994 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.903253 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.922128 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.942393 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.963490 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 13:11:11 crc kubenswrapper[4959]: I0121 13:11:11.983683 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.002333 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.023169 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.043067 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.063025 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.126340 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjhl\" (UniqueName: \"kubernetes.io/projected/a8697566-9d27-4d19-be54-2c5307ab5962-kube-api-access-jfjhl\") pod \"downloads-7954f5f757-7pb5k\" (UID: \"a8697566-9d27-4d19-be54-2c5307ab5962\") " pod="openshift-console/downloads-7954f5f757-7pb5k" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.147880 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhst2\" (UniqueName: \"kubernetes.io/projected/7a29e505-0841-4b13-9f9b-3ad6984bc580-kube-api-access-fhst2\") pod \"apiserver-76f77b778f-q5vmq\" (UID: \"7a29e505-0841-4b13-9f9b-3ad6984bc580\") " pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.189282 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhw5n\" (UniqueName: 
\"kubernetes.io/projected/575852dc-3cb2-4a05-8fd0-aad5aef44b92-kube-api-access-vhw5n\") pod \"console-operator-58897d9998-rvj6z\" (UID: \"575852dc-3cb2-4a05-8fd0-aad5aef44b92\") " pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.197281 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw7ft\" (UniqueName: \"kubernetes.io/projected/1f01767f-8e58-40cf-a88d-91ffea7c6b4a-kube-api-access-cw7ft\") pod \"openshift-config-operator-7777fb866f-pfc9p\" (UID: \"1f01767f-8e58-40cf-a88d-91ffea7c6b4a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.221771 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt6x2\" (UniqueName: \"kubernetes.io/projected/277cb73f-7c9e-46e0-bb04-4baea31ec998-kube-api-access-xt6x2\") pod \"console-f9d7485db-5qxvm\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.236640 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.238198 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xhbr\" (UniqueName: \"kubernetes.io/projected/e8fbacbf-6d70-4d37-a123-30151512cf5f-kube-api-access-8xhbr\") pod \"machine-api-operator-5694c8668f-tlw47\" (UID: \"e8fbacbf-6d70-4d37-a123-30151512cf5f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.243179 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.255316 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.258811 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tz97\" (UniqueName: \"kubernetes.io/projected/fc248d84-5152-4675-9b2b-596ba0b2dc7c-kube-api-access-5tz97\") pod \"route-controller-manager-6576b87f9c-hl7gd\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.280674 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f99vm\" (UniqueName: \"kubernetes.io/projected/fe969621-d2d3-4af2-b0ff-28657f978ca4-kube-api-access-f99vm\") pod \"etcd-operator-b45778765-pfbvr\" (UID: \"fe969621-d2d3-4af2-b0ff-28657f978ca4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.297769 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2f6\" (UniqueName: \"kubernetes.io/projected/0bd68930-fc78-45b1-b297-b60b53ad6823-kube-api-access-4x2f6\") pod \"openshift-controller-manager-operator-756b6f6bc6-rgnqt\" (UID: \"0bd68930-fc78-45b1-b297-b60b53ad6823\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.299415 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-7pb5k" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.301681 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.303861 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.324968 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.327627 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.327702 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.327732 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.346490 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.351904 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.364612 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.382403 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.404067 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.425289 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.445039 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.462135 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p"] Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.463506 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 13:11:12 crc kubenswrapper[4959]: W0121 13:11:12.476564 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f01767f_8e58_40cf_a88d_91ffea7c6b4a.slice/crio-7f2ad5244ca3b78b2cfe4050e9867ed7be3fbdf25e89725d0eb94e5f7c6469af WatchSource:0}: Error finding container 7f2ad5244ca3b78b2cfe4050e9867ed7be3fbdf25e89725d0eb94e5f7c6469af: Status 404 returned error can't find the container with id 7f2ad5244ca3b78b2cfe4050e9867ed7be3fbdf25e89725d0eb94e5f7c6469af Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.489585 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.504397 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.506321 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.526637 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.529078 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.544149 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.544268 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rvj6z"] Jan 21 13:11:12 crc kubenswrapper[4959]: W0121 13:11:12.555230 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod575852dc_3cb2_4a05_8fd0_aad5aef44b92.slice/crio-10abafe90934ab643d00a9d17d1cfbc5b201a249ac41bcce33961f75ff31b5d2 WatchSource:0}: Error finding container 10abafe90934ab643d00a9d17d1cfbc5b201a249ac41bcce33961f75ff31b5d2: Status 404 returned error can't find the container with id 10abafe90934ab643d00a9d17d1cfbc5b201a249ac41bcce33961f75ff31b5d2 Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.565080 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.566467 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.571940 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7pb5k"] Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.582867 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 13:11:12 crc kubenswrapper[4959]: W0121 13:11:12.587824 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8697566_9d27_4d19_be54_2c5307ab5962.slice/crio-ce2848c386d0917b822b4bbb557ea663696eca076200707b0e03060b12a774dc WatchSource:0}: Error finding container ce2848c386d0917b822b4bbb557ea663696eca076200707b0e03060b12a774dc: Status 404 returned error can't find the container with id ce2848c386d0917b822b4bbb557ea663696eca076200707b0e03060b12a774dc Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.601577 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.604626 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt"] Jan 21 13:11:12 crc kubenswrapper[4959]: W0121 13:11:12.613519 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bd68930_fc78_45b1_b297_b60b53ad6823.slice/crio-5304bc037676d0d5f4a97db207db378b0de5ac65fb4c9cf6aa37b5abebd6ddd0 WatchSource:0}: Error finding container 5304bc037676d0d5f4a97db207db378b0de5ac65fb4c9cf6aa37b5abebd6ddd0: Status 404 returned error can't find the container with id 5304bc037676d0d5f4a97db207db378b0de5ac65fb4c9cf6aa37b5abebd6ddd0 Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.621667 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.642421 4959 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.664437 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.682713 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.702144 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.708740 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q5vmq"] Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.711763 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5qxvm"] Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.721197 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tlw47"] Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.723514 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.741411 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.762124 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.780525 4959 request.go:700] Waited for 1.912450345s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.782306 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.802669 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.818543 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pfbvr"] Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.822314 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.858837 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw7fk\" (UniqueName: \"kubernetes.io/projected/124c92ca-d749-4da9-afe2-f7002b29f983-kube-api-access-rw7fk\") pod \"authentication-operator-69f744f599-hvsqs\" (UID: \"124c92ca-d749-4da9-afe2-f7002b29f983\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.861866 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.881725 4959 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.901920 4959 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.922067 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.942453 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.962847 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.984659 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 13:11:12 crc kubenswrapper[4959]: I0121 13:11:12.991945 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"] Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.022132 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.025930 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.029472 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.049730 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.059974 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.062036 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.074038 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.101383 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.104649 4959 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: 
timed out waiting for the condition Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.104677 4959 projected.go:194] Error preparing data for projected volume kube-api-access-knd2c for pod openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.104757 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-kube-api-access-knd2c podName:c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d nodeName:}" failed. No retries permitted until 2026-01-21 13:11:13.604730116 +0000 UTC m=+134.567760679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-knd2c" (UniqueName: "kubernetes.io/projected/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-kube-api-access-knd2c") pod "cluster-samples-operator-665b6dd947-gr22l" (UID: "c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.114341 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzgsr\" (UniqueName: \"kubernetes.io/projected/a3221620-4989-4bff-8cfc-19da6a21a2da-kube-api-access-xzgsr\") pod \"controller-manager-879f6c89f-d8t58\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.115447 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" event={"ID":"0bd68930-fc78-45b1-b297-b60b53ad6823","Type":"ContainerStarted","Data":"5304bc037676d0d5f4a97db207db378b0de5ac65fb4c9cf6aa37b5abebd6ddd0"} Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.126000 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.129624 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rvj6z" event={"ID":"575852dc-3cb2-4a05-8fd0-aad5aef44b92","Type":"ContainerStarted","Data":"10abafe90934ab643d00a9d17d1cfbc5b201a249ac41bcce33961f75ff31b5d2"} Jan 21 13:11:13 crc kubenswrapper[4959]: W0121 13:11:13.131781 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277cb73f_7c9e_46e0_bb04_4baea31ec998.slice/crio-26dd0e135cf72beabb396d8552b72290a4256fa780a88cab04e6aecb90861f8f WatchSource:0}: Error finding container 26dd0e135cf72beabb396d8552b72290a4256fa780a88cab04e6aecb90861f8f: Status 404 returned error can't find the container with id 26dd0e135cf72beabb396d8552b72290a4256fa780a88cab04e6aecb90861f8f Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.137383 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7pb5k" event={"ID":"a8697566-9d27-4d19-be54-2c5307ab5962","Type":"ContainerStarted","Data":"ce2848c386d0917b822b4bbb557ea663696eca076200707b0e03060b12a774dc"} Jan 21 13:11:13 crc kubenswrapper[4959]: W0121 13:11:13.137523 4959 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe969621_d2d3_4af2_b0ff_28657f978ca4.slice/crio-581fd731d95603df05e511c9edde813c0843b36f16a1ee45ac1ea133bdcec63d WatchSource:0}: Error finding container 581fd731d95603df05e511c9edde813c0843b36f16a1ee45ac1ea133bdcec63d: Status 404 returned error can't find the container with id 581fd731d95603df05e511c9edde813c0843b36f16a1ee45ac1ea133bdcec63d Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.139718 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" event={"ID":"1f01767f-8e58-40cf-a88d-91ffea7c6b4a","Type":"ContainerStarted","Data":"7f2ad5244ca3b78b2cfe4050e9867ed7be3fbdf25e89725d0eb94e5f7c6469af"} Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141432 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1267c86a-bd57-4042-b853-47e17f96d636-profile-collector-cert\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141510 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141562 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141635 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7947311e-2d41-4e52-8b62-e27b635a889a-audit-dir\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141719 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141888 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dacf6074-2418-407e-a3a3-db84f33e1147-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141916 4959 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvz9z\" (UniqueName: \"kubernetes.io/projected/7947311e-2d41-4e52-8b62-e27b635a889a-kube-api-access-dvz9z\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141937 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.141959 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40cbf264-85ae-42cb-bdbc-3548603501bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.142012 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/675c0c62-9109-4128-93c9-801f66debbaf-audit-dir\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.142076 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxpv\" (UniqueName: \"kubernetes.io/projected/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-kube-api-access-dpxpv\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.142244 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-trusted-ca\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.142311 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.142333 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1267c86a-bd57-4042-b853-47e17f96d636-srv-cert\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.142446 4959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.142530 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/40cbf264-85ae-42cb-bdbc-3548603501bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.143234 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.143721 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.144061 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-audit-policies\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.144924 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-etcd-client\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.145261 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-serving-cert\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.145321 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4z7j\" (UniqueName: \"kubernetes.io/projected/675c0c62-9109-4128-93c9-801f66debbaf-kube-api-access-c4z7j\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: W0121 13:11:13.145488 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc248d84_5152_4675_9b2b_596ba0b2dc7c.slice/crio-c6bf26cf2005fdb939306a0e339d92ed68701bb1f5844c5a8a8c7163fd520771 WatchSource:0}: Error finding 
container c6bf26cf2005fdb939306a0e339d92ed68701bb1f5844c5a8a8c7163fd520771: Status 404 returned error can't find the container with id c6bf26cf2005fdb939306a0e339d92ed68701bb1f5844c5a8a8c7163fd520771 Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146090 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d74ebb8-a165-44d5-a5cf-17217e03be90-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146291 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-bound-sa-token\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146442 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405c903d-189e-4c88-9dbd-01f12df65580-config\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146554 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacf6074-2418-407e-a3a3-db84f33e1147-config\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146604 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e046011a-da96-4097-bf84-c73160147343-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146734 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-certificates\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146788 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d74ebb8-a165-44d5-a5cf-17217e03be90-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146844 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-encryption-config\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.146944 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147019 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147148 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-config\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147207 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147294 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147384 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/405c903d-189e-4c88-9dbd-01f12df65580-machine-approver-tls\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147423 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmbks\" (UniqueName: \"kubernetes.io/projected/e046011a-da96-4097-bf84-c73160147343-kube-api-access-xmbks\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147560 4959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52lp5\" (UniqueName: \"kubernetes.io/projected/1267c86a-bd57-4042-b853-47e17f96d636-kube-api-access-52lp5\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147637 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c5fb02-af08-481e-a141-649021a5df80-config\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147772 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147849 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.147933 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgqd\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-kube-api-access-5jgqd\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148009 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacf6074-2418-407e-a3a3-db84f33e1147-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148155 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c5fb02-af08-481e-a141-649021a5df80-serving-cert\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148210 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40cbf264-85ae-42cb-bdbc-3548603501bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc 
kubenswrapper[4959]: I0121 13:11:13.148281 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-tls\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148337 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpvtn\" (UniqueName: \"kubernetes.io/projected/35c5fb02-af08-481e-a141-649021a5df80-kube-api-access-hpvtn\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148384 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrg2\" (UniqueName: \"kubernetes.io/projected/a73eaf04-cbe4-4af5-b602-935b5a92850c-kube-api-access-nxrg2\") pod \"migrator-59844c95c7-5jj79\" (UID: \"a73eaf04-cbe4-4af5-b602-935b5a92850c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148431 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148477 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148527 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148574 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-audit-policies\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148812 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148859 
4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvpw\" (UniqueName: \"kubernetes.io/projected/405c903d-189e-4c88-9dbd-01f12df65580-kube-api-access-5bvpw\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.148951 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.149005 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/405c903d-189e-4c88-9dbd-01f12df65580-auth-proxy-config\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.149089 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e046011a-da96-4097-bf84-c73160147343-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.149175 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdk7\" (UniqueName: \"kubernetes.io/projected/40cbf264-85ae-42cb-bdbc-3548603501bf-kube-api-access-rfdk7\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.149299 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.149789 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:13.649766749 +0000 UTC m=+134.612797322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.249902 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.250222 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:13.750191217 +0000 UTC m=+134.713221760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250567 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250593 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250628 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/feec37c0-15ae-4bcf-af2c-1c1622f0edd4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8hcw2\" (UID: \"feec37c0-15ae-4bcf-af2c-1c1622f0edd4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250667 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 
crc kubenswrapper[4959]: I0121 13:11:13.250686 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40cbf264-85ae-42cb-bdbc-3548603501bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250707 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpxpv\" (UniqueName: \"kubernetes.io/projected/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-kube-api-access-dpxpv\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250728 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250764 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250787 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1267c86a-bd57-4042-b853-47e17f96d636-srv-cert\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250811 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vtpb\" (UniqueName: \"kubernetes.io/projected/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-kube-api-access-9vtpb\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250830 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e514cd18-c4ea-4758-8023-08dfdbc87717-proxy-tls\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250852 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/40cbf264-85ae-42cb-bdbc-3548603501bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc 
kubenswrapper[4959]: I0121 13:11:13.250876 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250898 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft6bb\" (UniqueName: \"kubernetes.io/projected/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-kube-api-access-ft6bb\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250924 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-audit-policies\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250948 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfs5\" (UniqueName: \"kubernetes.io/projected/130ce609-755d-4564-8c8a-3b9038e201bc-kube-api-access-kdfs5\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.250972 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-metrics-certs\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251009 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-etcd-client\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251034 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-serving-cert\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251060 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4z7j\" (UniqueName: \"kubernetes.io/projected/675c0c62-9109-4128-93c9-801f66debbaf-kube-api-access-c4z7j\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251087 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/ab04d280-8b58-44e7-a789-f706b8c5f807-secret-volume\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251124 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5aaa49b6-0304-4205-85d4-3f23a10d25ad-images\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251141 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5mgl\" (UniqueName: \"kubernetes.io/projected/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-kube-api-access-l5mgl\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251160 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d74ebb8-a165-44d5-a5cf-17217e03be90-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251181 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405c903d-189e-4c88-9dbd-01f12df65580-config\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251211 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e046011a-da96-4097-bf84-c73160147343-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251230 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2qcf\" (UniqueName: \"kubernetes.io/projected/feec37c0-15ae-4bcf-af2c-1c1622f0edd4-kube-api-access-h2qcf\") pod \"control-plane-machine-set-operator-78cbb6b69f-8hcw2\" (UID: \"feec37c0-15ae-4bcf-af2c-1c1622f0edd4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251262 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-certificates\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251279 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ab04d280-8b58-44e7-a789-f706b8c5f807-config-volume\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251297 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e4516173-5c5e-465b-a405-4d6d4fe5454b-tmpfs\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251958 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.251997 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-config\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252024 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252052 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmbks\" (UniqueName: \"kubernetes.io/projected/e046011a-da96-4097-bf84-c73160147343-kube-api-access-xmbks\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252256 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e4516173-5c5e-465b-a405-4d6d4fe5454b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252359 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252413 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jgqd\" (UniqueName: 
\"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-kube-api-access-5jgqd\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252445 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6153e9bd-6355-4d5c-9acc-204180e45789-metrics-tls\") pod \"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252472 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-default-certificate\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252723 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d74ebb8-a165-44d5-a5cf-17217e03be90-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252856 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.252909 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-audit-policies\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.253329 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-config\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.253920 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405c903d-189e-4c88-9dbd-01f12df65580-config\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.254788 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c5fb02-af08-481e-a141-649021a5df80-serving-cert\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 
13:11:13.255019 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4t7j\" (UniqueName: \"kubernetes.io/projected/929ba030-0142-47b2-81c8-e82ed1d7227b-kube-api-access-m4t7j\") pod \"ingress-canary-2xdfx\" (UID: \"929ba030-0142-47b2-81c8-e82ed1d7227b\") " pod="openshift-ingress-canary/ingress-canary-2xdfx" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255057 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/130ce609-755d-4564-8c8a-3b9038e201bc-signing-key\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255115 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255239 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-certificates\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255257 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5dmp\" (UniqueName: \"kubernetes.io/projected/6153e9bd-6355-4d5c-9acc-204180e45789-kube-api-access-m5dmp\") pod \"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255334 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-certs\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255386 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/405c903d-189e-4c88-9dbd-01f12df65580-auth-proxy-config\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255525 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2pv\" (UniqueName: \"kubernetes.io/projected/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-kube-api-access-cr2pv\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255595 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/1267c86a-bd57-4042-b853-47e17f96d636-profile-collector-cert\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255637 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-registration-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.255950 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7947311e-2d41-4e52-8b62-e27b635a889a-audit-dir\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256000 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256004 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/405c903d-189e-4c88-9dbd-01f12df65580-auth-proxy-config\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256064 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-plugins-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256113 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9dp\" (UniqueName: \"kubernetes.io/projected/e514cd18-c4ea-4758-8023-08dfdbc87717-kube-api-access-bf9dp\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256543 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7947311e-2d41-4e52-8b62-e27b635a889a-audit-dir\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256732 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dacf6074-2418-407e-a3a3-db84f33e1147-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256790 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvz9z\" (UniqueName: \"kubernetes.io/projected/7947311e-2d41-4e52-8b62-e27b635a889a-kube-api-access-dvz9z\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256828 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-socket-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256833 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256961 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/675c0c62-9109-4128-93c9-801f66debbaf-audit-dir\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.256997 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/675c0c62-9109-4128-93c9-801f66debbaf-audit-dir\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257013 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w79n\" (UniqueName: \"kubernetes.io/projected/ab04d280-8b58-44e7-a789-f706b8c5f807-kube-api-access-5w79n\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257056 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-trusted-ca\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257101 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6a4be35-59c0-429a-a499-66cb3dc85aa5-metrics-tls\") pod \"dns-operator-744455d44c-b7ws8\" (UID: \"e6a4be35-59c0-429a-a499-66cb3dc85aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257154 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257235 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/130ce609-755d-4564-8c8a-3b9038e201bc-signing-cabundle\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257353 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257392 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f5c68213-b071-467f-9243-20d2c99b520c-srv-cert\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257422 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-csi-data-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257458 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-stats-auth\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257578 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfkx\" (UniqueName: \"kubernetes.io/projected/cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08-kube-api-access-7qfkx\") pod \"package-server-manager-789f6589d5-dghzw\" (UID: \"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257704 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-bound-sa-token\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257747 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6153e9bd-6355-4d5c-9acc-204180e45789-config-volume\") pod \"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257769 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-mountpoint-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257825 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacf6074-2418-407e-a3a3-db84f33e1147-config\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257882 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dghzw\" (UID: \"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257920 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9m6w\" (UniqueName: \"kubernetes.io/projected/b93bf160-39a1-43b0-a409-59b814a14258-kube-api-access-w9m6w\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257942 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d74ebb8-a165-44d5-a5cf-17217e03be90-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257959 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-encryption-config\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257977 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f5c68213-b071-467f-9243-20d2c99b520c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.257992 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e4516173-5c5e-465b-a405-4d6d4fe5454b-webhook-cert\") pod 
\"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258017 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258033 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: \"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258066 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258088 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/405c903d-189e-4c88-9dbd-01f12df65580-machine-approver-tls\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258117 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5aaa49b6-0304-4205-85d4-3f23a10d25ad-proxy-tls\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258138 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52lp5\" (UniqueName: \"kubernetes.io/projected/1267c86a-bd57-4042-b853-47e17f96d636-kube-api-access-52lp5\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258154 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c5fb02-af08-481e-a141-649021a5df80-config\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258170 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5aaa49b6-0304-4205-85d4-3f23a10d25ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b4wgb\" 
(UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258184 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: \"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258200 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-trusted-ca\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258203 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258251 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2d34f545-b950-49af-9300-d1eb2a1495eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xgxkj\" (UID: \"2d34f545-b950-49af-9300-d1eb2a1495eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258253 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e046011a-da96-4097-bf84-c73160147343-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258271 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-node-bootstrap-token\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258299 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacf6074-2418-407e-a3a3-db84f33e1147-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258320 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: 
\"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258341 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40cbf264-85ae-42cb-bdbc-3548603501bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258362 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-service-ca-bundle\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258386 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpvtn\" (UniqueName: \"kubernetes.io/projected/35c5fb02-af08-481e-a141-649021a5df80-kube-api-access-hpvtn\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258406 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrg2\" (UniqueName: \"kubernetes.io/projected/a73eaf04-cbe4-4af5-b602-935b5a92850c-kube-api-access-nxrg2\") pod \"migrator-59844c95c7-5jj79\" (UID: \"a73eaf04-cbe4-4af5-b602-935b5a92850c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258425 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtng\" (UniqueName: \"kubernetes.io/projected/e6a4be35-59c0-429a-a499-66cb3dc85aa5-kube-api-access-jqtng\") pod \"dns-operator-744455d44c-b7ws8\" (UID: \"e6a4be35-59c0-429a-a499-66cb3dc85aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258443 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgfk\" (UniqueName: \"kubernetes.io/projected/f5c68213-b071-467f-9243-20d2c99b520c-kube-api-access-nwgfk\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258472 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4q6\" (UniqueName: \"kubernetes.io/projected/5aaa49b6-0304-4205-85d4-3f23a10d25ad-kube-api-access-kh4q6\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258488 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-tls\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: 
\"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258503 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgf5g\" (UniqueName: \"kubernetes.io/projected/2d34f545-b950-49af-9300-d1eb2a1495eb-kube-api-access-zgf5g\") pod \"multus-admission-controller-857f4d67dd-xgxkj\" (UID: \"2d34f545-b950-49af-9300-d1eb2a1495eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258523 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258541 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258559 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-audit-policies\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258578 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvpw\" (UniqueName: \"kubernetes.io/projected/405c903d-189e-4c88-9dbd-01f12df65580-kube-api-access-5bvpw\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258582 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacf6074-2418-407e-a3a3-db84f33e1147-config\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258598 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.258999 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.259004 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-etcd-client\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.259168 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.259425 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1267c86a-bd57-4042-b853-47e17f96d636-srv-cert\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.260037 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40cbf264-85ae-42cb-bdbc-3548603501bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.260466 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.260562 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c5fb02-af08-481e-a141-649021a5df80-config\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.260836 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c5fb02-af08-481e-a141-649021a5df80-serving-cert\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.261623 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.261683 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.261750 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e046011a-da96-4097-bf84-c73160147343-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.261779 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2rhm\" (UniqueName: \"kubernetes.io/projected/e4516173-5c5e-465b-a405-4d6d4fe5454b-kube-api-access-c2rhm\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.261799 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e514cd18-c4ea-4758-8023-08dfdbc87717-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.261837 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdk7\" (UniqueName: \"kubernetes.io/projected/40cbf264-85ae-42cb-bdbc-3548603501bf-kube-api-access-rfdk7\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.262064 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-audit-policies\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.262062 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.262115 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:13.76207659 +0000 UTC m=+134.725107133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.262276 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.262296 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-serving-cert\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.262348 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/929ba030-0142-47b2-81c8-e82ed1d7227b-cert\") pod \"ingress-canary-2xdfx\" (UID: \"929ba030-0142-47b2-81c8-e82ed1d7227b\") " pod="openshift-ingress-canary/ingress-canary-2xdfx"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.262370 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.262604 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.262999 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e046011a-da96-4097-bf84-c73160147343-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.263155 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7947311e-2d41-4e52-8b62-e27b635a889a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.265462 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.265478 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7947311e-2d41-4e52-8b62-e27b635a889a-encryption-config\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.266445 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.271187 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1267c86a-bd57-4042-b853-47e17f96d636-profile-collector-cert\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.271564 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/40cbf264-85ae-42cb-bdbc-3548603501bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.273426 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-tls\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.274663 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.274835 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.274889 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.275049 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.275119 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.275419 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacf6074-2418-407e-a3a3-db84f33e1147-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.275630 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d74ebb8-a165-44d5-a5cf-17217e03be90-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.275641 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.280761 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/405c903d-189e-4c88-9dbd-01f12df65580-machine-approver-tls\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.297113 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40cbf264-85ae-42cb-bdbc-3548603501bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.319066 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.342252 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpxpv\" (UniqueName: \"kubernetes.io/projected/d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf-kube-api-access-dpxpv\") pod \"ingress-operator-5b745b69d9-n7jtw\" (UID: \"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.367860 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368130 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5aaa49b6-0304-4205-85d4-3f23a10d25ad-proxy-tls\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368171 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5aaa49b6-0304-4205-85d4-3f23a10d25ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368190 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: \"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368212 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2d34f545-b950-49af-9300-d1eb2a1495eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xgxkj\" (UID: \"2d34f545-b950-49af-9300-d1eb2a1495eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj"
Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.368262 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:13.868223353 +0000 UTC m=+134.831253896 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368312 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-node-bootstrap-token\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368357 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: \"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368380 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-service-ca-bundle\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368413 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtng\" (UniqueName: \"kubernetes.io/projected/e6a4be35-59c0-429a-a499-66cb3dc85aa5-kube-api-access-jqtng\") pod \"dns-operator-744455d44c-b7ws8\" (UID: \"e6a4be35-59c0-429a-a499-66cb3dc85aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368429 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgfk\" (UniqueName: \"kubernetes.io/projected/f5c68213-b071-467f-9243-20d2c99b520c-kube-api-access-nwgfk\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368448 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4q6\" (UniqueName: \"kubernetes.io/projected/5aaa49b6-0304-4205-85d4-3f23a10d25ad-kube-api-access-kh4q6\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368476 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgf5g\" (UniqueName: \"kubernetes.io/projected/2d34f545-b950-49af-9300-d1eb2a1495eb-kube-api-access-zgf5g\") pod \"multus-admission-controller-857f4d67dd-xgxkj\" (UID: \"2d34f545-b950-49af-9300-d1eb2a1495eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368499 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368522 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368539 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2rhm\" (UniqueName: \"kubernetes.io/projected/e4516173-5c5e-465b-a405-4d6d4fe5454b-kube-api-access-c2rhm\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368556 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e514cd18-c4ea-4758-8023-08dfdbc87717-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368616 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/929ba030-0142-47b2-81c8-e82ed1d7227b-cert\") pod \"ingress-canary-2xdfx\" (UID: \"929ba030-0142-47b2-81c8-e82ed1d7227b\") " pod="openshift-ingress-canary/ingress-canary-2xdfx"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368636 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368670 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/feec37c0-15ae-4bcf-af2c-1c1622f0edd4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8hcw2\" (UID: \"feec37c0-15ae-4bcf-af2c-1c1622f0edd4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368701 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368726 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vtpb\" (UniqueName: \"kubernetes.io/projected/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-kube-api-access-9vtpb\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368744 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e514cd18-c4ea-4758-8023-08dfdbc87717-proxy-tls\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368776 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft6bb\" (UniqueName: \"kubernetes.io/projected/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-kube-api-access-ft6bb\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368805 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdfs5\" (UniqueName: \"kubernetes.io/projected/130ce609-755d-4564-8c8a-3b9038e201bc-kube-api-access-kdfs5\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368823 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-metrics-certs\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368855 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab04d280-8b58-44e7-a789-f706b8c5f807-secret-volume\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368874 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5aaa49b6-0304-4205-85d4-3f23a10d25ad-images\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368890 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5mgl\" (UniqueName: \"kubernetes.io/projected/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-kube-api-access-l5mgl\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368916 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2qcf\" (UniqueName: \"kubernetes.io/projected/feec37c0-15ae-4bcf-af2c-1c1622f0edd4-kube-api-access-h2qcf\") pod \"control-plane-machine-set-operator-78cbb6b69f-8hcw2\" (UID: \"feec37c0-15ae-4bcf-af2c-1c1622f0edd4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368949 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab04d280-8b58-44e7-a789-f706b8c5f807-config-volume\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.368966 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e4516173-5c5e-465b-a405-4d6d4fe5454b-tmpfs\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.369009 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e4516173-5c5e-465b-a405-4d6d4fe5454b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.369044 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6153e9bd-6355-4d5c-9acc-204180e45789-metrics-tls\") pod \"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.369058 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-default-certificate\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.369081 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/130ce609-755d-4564-8c8a-3b9038e201bc-signing-key\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.369116 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4t7j\" (UniqueName: \"kubernetes.io/projected/929ba030-0142-47b2-81c8-e82ed1d7227b-kube-api-access-m4t7j\") pod \"ingress-canary-2xdfx\" (UID: \"929ba030-0142-47b2-81c8-e82ed1d7227b\") " pod="openshift-ingress-canary/ingress-canary-2xdfx"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.370638 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5aaa49b6-0304-4205-85d4-3f23a10d25ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.371717 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5aaa49b6-0304-4205-85d4-3f23a10d25ad-images\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.372570 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.374005 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e514cd18-c4ea-4758-8023-08dfdbc87717-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.374296 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-service-ca-bundle\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.374595 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.374936 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.376412 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6jnfq\" (UID: \"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.376568 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab04d280-8b58-44e7-a789-f706b8c5f807-secret-volume\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.376764 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab04d280-8b58-44e7-a789-f706b8c5f807-config-volume\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377327 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5dmp\" (UniqueName: \"kubernetes.io/projected/6153e9bd-6355-4d5c-9acc-204180e45789-kube-api-access-m5dmp\") pod \"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m"
Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.377411 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:13.877351311 +0000 UTC m=+134.840381854 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377501 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-certs\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377574 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr2pv\" (UniqueName: \"kubernetes.io/projected/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-kube-api-access-cr2pv\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377609 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-registration-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377637 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-plugins-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377658 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9dp\" (UniqueName: \"kubernetes.io/projected/e514cd18-c4ea-4758-8023-08dfdbc87717-kube-api-access-bf9dp\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377681 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377698 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-socket-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377728 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w79n\" (UniqueName: \"kubernetes.io/projected/ab04d280-8b58-44e7-a789-f706b8c5f807-kube-api-access-5w79n\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377751 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6a4be35-59c0-429a-a499-66cb3dc85aa5-metrics-tls\") pod \"dns-operator-744455d44c-b7ws8\" (UID: \"e6a4be35-59c0-429a-a499-66cb3dc85aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377772 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/130ce609-755d-4564-8c8a-3b9038e201bc-signing-cabundle\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377794 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377809 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-csi-data-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377827 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-stats-auth\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377843 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f5c68213-b071-467f-9243-20d2c99b520c-srv-cert\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377868 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfkx\" (UniqueName: \"kubernetes.io/projected/cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08-kube-api-access-7qfkx\") pod \"package-server-manager-789f6589d5-dghzw\" (UID: \"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377885 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377907 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6153e9bd-6355-4d5c-9acc-204180e45789-config-volume\") pod \"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377904 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e514cd18-c4ea-4758-8023-08dfdbc87717-proxy-tls\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377924 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-mountpoint-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.378033 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dghzw\" (UID: \"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.378081 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9m6w\" (UniqueName: \"kubernetes.io/projected/b93bf160-39a1-43b0-a409-59b814a14258-kube-api-access-w9m6w\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.378146 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f5c68213-b071-467f-9243-20d2c99b520c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.378174 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e4516173-5c5e-465b-a405-4d6d4fe5454b-webhook-cert\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.378207 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: \"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.378933 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/130ce609-755d-4564-8c8a-3b9038e201bc-signing-cabundle\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.379080 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: \"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377973 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-mountpoint-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.379292 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-registration-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.379344 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-plugins-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.379416 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-node-bootstrap-token\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.379496 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-socket-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.377331 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2d34f545-b950-49af-9300-d1eb2a1495eb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xgxkj\" (UID: \"2d34f545-b950-49af-9300-d1eb2a1495eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.379928 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b93bf160-39a1-43b0-a409-59b814a14258-csi-data-dir\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.383015 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dghzw\" (UID: \"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.383225 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4z7j\" (UniqueName: \"kubernetes.io/projected/675c0c62-9109-4128-93c9-801f66debbaf-kube-api-access-c4z7j\") pod \"oauth-openshift-558db77b4-sxbb8\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.383319 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6153e9bd-6355-4d5c-9acc-204180e45789-config-volume\") pod \"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.383713 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5aaa49b6-0304-4205-85d4-3f23a10d25ad-proxy-tls\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.383791 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: \"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.383977 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-default-certificate\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.384342 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.384587 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6a4be35-59c0-429a-a499-66cb3dc85aa5-metrics-tls\") pod \"dns-operator-744455d44c-b7ws8\" (UID: \"e6a4be35-59c0-429a-a499-66cb3dc85aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.385610 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e4516173-5c5e-465b-a405-4d6d4fe5454b-webhook-cert\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.386206 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e4516173-5c5e-465b-a405-4d6d4fe5454b-tmpfs\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.386437 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-certs\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.387508 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-metrics-certs\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.388362 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-stats-auth\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.399321 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/feec37c0-15ae-4bcf-af2c-1c1622f0edd4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8hcw2\" (UID: \"feec37c0-15ae-4bcf-af2c-1c1622f0edd4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.399347 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f5c68213-b071-467f-9243-20d2c99b520c-srv-cert\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.399494 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/929ba030-0142-47b2-81c8-e82ed1d7227b-cert\") pod \"ingress-canary-2xdfx\" (UID: \"929ba030-0142-47b2-81c8-e82ed1d7227b\") " pod="openshift-ingress-canary/ingress-canary-2xdfx"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.399618 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e4516173-5c5e-465b-a405-4d6d4fe5454b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"
Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.399676 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6153e9bd-6355-4d5c-9acc-204180e45789-metrics-tls\") pod
\"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.400324 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/130ce609-755d-4564-8c8a-3b9038e201bc-signing-key\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.400954 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f5c68213-b071-467f-9243-20d2c99b520c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.403641 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jgqd\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-kube-api-access-5jgqd\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.407895 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hvsqs"] Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.418567 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmbks\" (UniqueName: \"kubernetes.io/projected/e046011a-da96-4097-bf84-c73160147343-kube-api-access-xmbks\") pod \"openshift-apiserver-operator-796bbdcf4f-r42bn\" (UID: \"e046011a-da96-4097-bf84-c73160147343\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.438744 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvz9z\" (UniqueName: \"kubernetes.io/projected/7947311e-2d41-4e52-8b62-e27b635a889a-kube-api-access-dvz9z\") pod \"apiserver-7bbb656c7d-9bt5m\" (UID: \"7947311e-2d41-4e52-8b62-e27b635a889a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.457782 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dacf6074-2418-407e-a3a3-db84f33e1147-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fd49l\" (UID: \"dacf6074-2418-407e-a3a3-db84f33e1147\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.478890 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.479082 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 13:11:13.979046804 +0000 UTC m=+134.942077347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.479752 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.480082 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:13.980074682 +0000 UTC m=+134.943105215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.484171 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-bound-sa-token\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.505482 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpvtn\" (UniqueName: \"kubernetes.io/projected/35c5fb02-af08-481e-a141-649021a5df80-kube-api-access-hpvtn\") pod \"service-ca-operator-777779d784-4dvd7\" (UID: \"35c5fb02-af08-481e-a141-649021a5df80\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.533854 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrg2\" (UniqueName: \"kubernetes.io/projected/a73eaf04-cbe4-4af5-b602-935b5a92850c-kube-api-access-nxrg2\") pod \"migrator-59844c95c7-5jj79\" (UID: \"a73eaf04-cbe4-4af5-b602-935b5a92850c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.547636 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52lp5\" (UniqueName: \"kubernetes.io/projected/1267c86a-bd57-4042-b853-47e17f96d636-kube-api-access-52lp5\") pod \"catalog-operator-68c6474976-nc7g5\" (UID: \"1267c86a-bd57-4042-b853-47e17f96d636\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc 
kubenswrapper[4959]: I0121 13:11:13.559354 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvpw\" (UniqueName: \"kubernetes.io/projected/405c903d-189e-4c88-9dbd-01f12df65580-kube-api-access-5bvpw\") pod \"machine-approver-56656f9798-hn87t\" (UID: \"405c903d-189e-4c88-9dbd-01f12df65580\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.577636 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.579597 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfdk7\" (UniqueName: \"kubernetes.io/projected/40cbf264-85ae-42cb-bdbc-3548603501bf-kube-api-access-rfdk7\") pod \"cluster-image-registry-operator-dc59b4c8b-66wck\" (UID: \"40cbf264-85ae-42cb-bdbc-3548603501bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.584232 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.585302 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.085274319 +0000 UTC m=+135.048304862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.594982 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.599779 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.609204 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.624430 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njqtd\" (UID: \"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.633742 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.638678 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5mgl\" (UniqueName: \"kubernetes.io/projected/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-kube-api-access-l5mgl\") pod \"marketplace-operator-79b997595-tr76d\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.644592 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.655617 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.661253 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d8t58"] Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.661798 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.665910 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vtpb\" (UniqueName: \"kubernetes.io/projected/f817e91b-c6c9-4fa8-b73f-743cf9ed97b3-kube-api-access-9vtpb\") pod \"router-default-5444994796-hn2rm\" (UID: \"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3\") " pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.680064 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2rhm\" (UniqueName: \"kubernetes.io/projected/e4516173-5c5e-465b-a405-4d6d4fe5454b-kube-api-access-c2rhm\") pod \"packageserver-d55dfcdfc-2jwjg\" (UID: \"e4516173-5c5e-465b-a405-4d6d4fe5454b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.682174 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.687006 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.687419 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.187401633 +0000 UTC m=+135.150432176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.687752 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knd2c\" (UniqueName: \"kubernetes.io/projected/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-kube-api-access-knd2c\") pod \"cluster-samples-operator-665b6dd947-gr22l\" (UID: \"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.690565 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.692624 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knd2c\" (UniqueName: \"kubernetes.io/projected/c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d-kube-api-access-knd2c\") pod \"cluster-samples-operator-665b6dd947-gr22l\" (UID: \"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.701477 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtng\" (UniqueName: \"kubernetes.io/projected/e6a4be35-59c0-429a-a499-66cb3dc85aa5-kube-api-access-jqtng\") pod \"dns-operator-744455d44c-b7ws8\" (UID: \"e6a4be35-59c0-429a-a499-66cb3dc85aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.701938 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.724756 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.728223 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw"] Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.736181 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgfk\" (UniqueName: \"kubernetes.io/projected/f5c68213-b071-467f-9243-20d2c99b520c-kube-api-access-nwgfk\") pod \"olm-operator-6b444d44fb-szs96\" (UID: \"f5c68213-b071-467f-9243-20d2c99b520c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.747975 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh4q6\" (UniqueName: \"kubernetes.io/projected/5aaa49b6-0304-4205-85d4-3f23a10d25ad-kube-api-access-kh4q6\") pod \"machine-config-operator-74547568cd-b4wgb\" (UID: \"5aaa49b6-0304-4205-85d4-3f23a10d25ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.759594 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.770763 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft6bb\" (UniqueName: \"kubernetes.io/projected/49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9-kube-api-access-ft6bb\") pod \"kube-storage-version-migrator-operator-b67b599dd-glxdb\" (UID: \"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.772903 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.777411 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgf5g\" (UniqueName: \"kubernetes.io/projected/2d34f545-b950-49af-9300-d1eb2a1495eb-kube-api-access-zgf5g\") pod \"multus-admission-controller-857f4d67dd-xgxkj\" (UID: \"2d34f545-b950-49af-9300-d1eb2a1495eb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.791822 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.792599 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.29257211 +0000 UTC m=+135.255602653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.796498 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.811573 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdfs5\" (UniqueName: \"kubernetes.io/projected/130ce609-755d-4564-8c8a-3b9038e201bc-kube-api-access-kdfs5\") pod \"service-ca-9c57cc56f-pm5c8\" (UID: \"130ce609-755d-4564-8c8a-3b9038e201bc\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.814170 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.837981 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4t7j\" (UniqueName: \"kubernetes.io/projected/929ba030-0142-47b2-81c8-e82ed1d7227b-kube-api-access-m4t7j\") pod \"ingress-canary-2xdfx\" (UID: \"929ba030-0142-47b2-81c8-e82ed1d7227b\") " pod="openshift-ingress-canary/ingress-canary-2xdfx" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.839290 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2xdfx" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.844745 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2qcf\" (UniqueName: \"kubernetes.io/projected/feec37c0-15ae-4bcf-af2c-1c1622f0edd4-kube-api-access-h2qcf\") pod \"control-plane-machine-set-operator-78cbb6b69f-8hcw2\" (UID: \"feec37c0-15ae-4bcf-af2c-1c1622f0edd4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.874933 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5dmp\" (UniqueName: \"kubernetes.io/projected/6153e9bd-6355-4d5c-9acc-204180e45789-kube-api-access-m5dmp\") pod \"dns-default-j4l2m\" (UID: \"6153e9bd-6355-4d5c-9acc-204180e45789\") " pod="openshift-dns/dns-default-j4l2m" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.894022 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.894376 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.394362255 +0000 UTC m=+135.357392798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.908182 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr2pv\" (UniqueName: \"kubernetes.io/projected/c5747e09-aac8-4ccc-b90f-2f8f61baa8e0-kube-api-access-cr2pv\") pod \"machine-config-server-v47ps\" (UID: \"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0\") " pod="openshift-machine-config-operator/machine-config-server-v47ps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.915534 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9dp\" (UniqueName: \"kubernetes.io/projected/e514cd18-c4ea-4758-8023-08dfdbc87717-kube-api-access-bf9dp\") pod \"machine-config-controller-84d6567774-grjhd\" (UID: \"e514cd18-c4ea-4758-8023-08dfdbc87717\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.924148 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sxbb8"] Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.940854 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w79n\" (UniqueName: \"kubernetes.io/projected/ab04d280-8b58-44e7-a789-f706b8c5f807-kube-api-access-5w79n\") pod \"collect-profiles-29483340-rvgvt\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.982803 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfkx\" (UniqueName: \"kubernetes.io/projected/cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08-kube-api-access-7qfkx\") pod \"package-server-manager-789f6589d5-dghzw\" (UID: \"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.985937 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9m6w\" (UniqueName: \"kubernetes.io/projected/b93bf160-39a1-43b0-a409-59b814a14258-kube-api-access-w9m6w\") pod \"csi-hostpathplugin-h5nps\" (UID: \"b93bf160-39a1-43b0-a409-59b814a14258\") " pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.986533 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.994991 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.995418 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.495377768 +0000 UTC m=+135.458408311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:13 crc kubenswrapper[4959]: I0121 13:11:13.995672 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:13 crc kubenswrapper[4959]: E0121 13:11:13.996324 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.496316004 +0000 UTC m=+135.459346547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.009641 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.017717 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.034148 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.039864 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.053401 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.060295 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.084563 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.097368 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.097836 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.597802841 +0000 UTC m=+135.560833384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.104921 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.127575 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-j4l2m" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.157694 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v47ps" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.171124 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h5nps" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.204069 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.208982 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" event={"ID":"e8fbacbf-6d70-4d37-a123-30151512cf5f","Type":"ContainerStarted","Data":"f945d976623c1083ab3c4237be9522b07cad47ad3606e0200f20b1e0d78f7e62"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.212030 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" event={"ID":"e8fbacbf-6d70-4d37-a123-30151512cf5f","Type":"ContainerStarted","Data":"1bee13ea9239230a0918b2fdf4e228f965c4971a9223b3f08d2503fd7c67ed64"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.212063 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" event={"ID":"e8fbacbf-6d70-4d37-a123-30151512cf5f","Type":"ContainerStarted","Data":"678870555a10ba27cd6f5b5574296863070791206604b350b5eeb0494c92ede4"} Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.210185 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.710166653 +0000 UTC m=+135.673197196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.222481 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" event={"ID":"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf","Type":"ContainerStarted","Data":"4945b2646bf7abfd429312d87ab448814548975cf56650383d07fa492957f3d3"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.237943 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" event={"ID":"124c92ca-d749-4da9-afe2-f7002b29f983","Type":"ContainerStarted","Data":"e3e90e07b254b10a576396f93c550cf87193083063aa6da45218b447671b6b64"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.238333 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" event={"ID":"124c92ca-d749-4da9-afe2-f7002b29f983","Type":"ContainerStarted","Data":"4ef9c2d6b8bba1fd086eb36bb4915af207b4d74db341d6a3977cbc3b35bf2063"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.254592 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" event={"ID":"a3221620-4989-4bff-8cfc-19da6a21a2da","Type":"ContainerStarted","Data":"7155ac0d101e875b14a32c870333b0369601e6735edd65a4ce993e5157acfdc2"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.255528 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.260191 4959 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-d8t58 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.260254 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.283605 4959 generic.go:334] "Generic (PLEG): container finished" podID="7a29e505-0841-4b13-9f9b-3ad6984bc580" containerID="142b91142e86757335bad549a2bb7bb015b267f78700dd9c8f615ae24128d340" exitCode=0 Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.283858 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" event={"ID":"7a29e505-0841-4b13-9f9b-3ad6984bc580","Type":"ContainerDied","Data":"142b91142e86757335bad549a2bb7bb015b267f78700dd9c8f615ae24128d340"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.283892 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" 
event={"ID":"7a29e505-0841-4b13-9f9b-3ad6984bc580","Type":"ContainerStarted","Data":"4980b39ab2583c659a6e765e4600ae3182c19c0300078cfc9cb5fa3d50554df1"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.314842 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.316280 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.816259064 +0000 UTC m=+135.779289607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.324278 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7pb5k" event={"ID":"a8697566-9d27-4d19-be54-2c5307ab5962","Type":"ContainerStarted","Data":"44ad9b61e042a808fd66405cfaff9cdb1a1acee0e9e8a720850a49bae8769155"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.333764 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7pb5k" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.337446 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" event={"ID":"fc248d84-5152-4675-9b2b-596ba0b2dc7c","Type":"ContainerStarted","Data":"91790559c510c741338728f5c91334b227f5b41164a54649e5fd252316daafae"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.337490 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" event={"ID":"fc248d84-5152-4675-9b2b-596ba0b2dc7c","Type":"ContainerStarted","Data":"c6bf26cf2005fdb939306a0e339d92ed68701bb1f5844c5a8a8c7163fd520771"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.338379 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.348818 4959 generic.go:334] "Generic (PLEG): container finished" podID="1f01767f-8e58-40cf-a88d-91ffea7c6b4a" containerID="a985da9c7d469a1a8a05c7b1e7052605ac8420f053d7725e3f1d510296a7f9ed" exitCode=0 Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.348936 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" event={"ID":"1f01767f-8e58-40cf-a88d-91ffea7c6b4a","Type":"ContainerDied","Data":"a985da9c7d469a1a8a05c7b1e7052605ac8420f053d7725e3f1d510296a7f9ed"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.353054 4959 patch_prober.go:28] interesting pod/downloads-7954f5f757-7pb5k 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.353149 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7pb5k" podUID="a8697566-9d27-4d19-be54-2c5307ab5962" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.354289 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79"] Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.355383 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" event={"ID":"405c903d-189e-4c88-9dbd-01f12df65580","Type":"ContainerStarted","Data":"5cbd2d8cb70b8e7be5da67d5fc0d27555c4cf5a301befd39e75629bc9f8d89f7"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.367278 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" event={"ID":"fe969621-d2d3-4af2-b0ff-28657f978ca4","Type":"ContainerStarted","Data":"9dab8ba24e8af7f98a575550c978d62bc9098b4ca4d187f3e03bb4284b27556b"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.367328 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" event={"ID":"fe969621-d2d3-4af2-b0ff-28657f978ca4","Type":"ContainerStarted","Data":"581fd731d95603df05e511c9edde813c0843b36f16a1ee45ac1ea133bdcec63d"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.391194 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hn2rm" event={"ID":"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3","Type":"ContainerStarted","Data":"91626438d21f6f11a094d539a5dcdbbbc790e0a60bf9b7a698c53037aafb7770"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.424622 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.426785 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.427923 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:14.927905337 +0000 UTC m=+135.890936070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.472197 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" event={"ID":"0bd68930-fc78-45b1-b297-b60b53ad6823","Type":"ContainerStarted","Data":"7bb5a53b53f095f7e4b53beb9a4e188dc27bd0931d38bf2da6491ccf8d623c81"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.500715 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5qxvm" event={"ID":"277cb73f-7c9e-46e0-bb04-4baea31ec998","Type":"ContainerStarted","Data":"aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.500786 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5qxvm" event={"ID":"277cb73f-7c9e-46e0-bb04-4baea31ec998","Type":"ContainerStarted","Data":"26dd0e135cf72beabb396d8552b72290a4256fa780a88cab04e6aecb90861f8f"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.523289 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rvj6z" event={"ID":"575852dc-3cb2-4a05-8fd0-aad5aef44b92","Type":"ContainerStarted","Data":"0f9d98689a1094ce1ddb8ac328db44b5342397c00f8aeff9f3febbf318dee9b2"} Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.524393 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.527291 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.528649 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.028628213 +0000 UTC m=+135.991658756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.560942 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck"] Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.567729 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5"] Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.585818 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn"] Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.628923 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.629827 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.129805631 +0000 UTC m=+136.092836174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.729679 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.730077 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.230055343 +0000 UTC m=+136.193085886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.831630 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.832164 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.332138806 +0000 UTC m=+136.295169359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.933702 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.934061 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.433830038 +0000 UTC m=+136.396860581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.934260 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:14 crc kubenswrapper[4959]: E0121 13:11:14.934889 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.434878707 +0000 UTC m=+136.397909250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:14 crc kubenswrapper[4959]: I0121 13:11:14.973386 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rvj6z" Jan 21 13:11:15 crc kubenswrapper[4959]: W0121 13:11:15.018078 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40cbf264_85ae_42cb_bdbc_3548603501bf.slice/crio-320112df2bf77b5dc2cb9fd90c243b48c8f838cde69fef55bf114287fce7863e WatchSource:0}: Error finding container 320112df2bf77b5dc2cb9fd90c243b48c8f838cde69fef55bf114287fce7863e: Status 404 returned error can't find the container with id 320112df2bf77b5dc2cb9fd90c243b48c8f838cde69fef55bf114287fce7863e Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.036955 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.037743 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.5377186 +0000 UTC m=+136.500749143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.161816 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.162330 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.662312054 +0000 UTC m=+136.625342597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.267961 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.268442 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.768422416 +0000 UTC m=+136.731452959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.372786 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.373636 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.873620804 +0000 UTC m=+136.836651347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.474801 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.474938 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.974912815 +0000 UTC m=+136.937943358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.476563 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:15.976538579 +0000 UTC m=+136.939569162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.476018 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.507523 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l"] Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.539824 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr76d"] Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.578459 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.578878 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.078857809 +0000 UTC m=+137.041888352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.589705 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" event={"ID":"7a29e505-0841-4b13-9f9b-3ad6984bc580","Type":"ContainerStarted","Data":"890ca78df3e5e9b9a5708b851f16d602fa006a7b149322198ea9814005fd9601"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.613798 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" event={"ID":"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf","Type":"ContainerStarted","Data":"970544ea323b424cf285de8201e849edbc142b818414ffbc0e7a242f94af0a88"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.613862 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" event={"ID":"d5d6ec37-afb1-4a3c-abd3-ef526ce3d8bf","Type":"ContainerStarted","Data":"7be3bd65346eecc8ac499ece23e7cc43ff1eef731e42fd0277222768f4ca6b05"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.626279 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" event={"ID":"1267c86a-bd57-4042-b853-47e17f96d636","Type":"ContainerStarted","Data":"1c3d570cb650d16b802913a40c0b9ccf51ed0c22b36760aae967d35c14851c50"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.631815 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" event={"ID":"405c903d-189e-4c88-9dbd-01f12df65580","Type":"ContainerStarted","Data":"0abeedb9b82e958fffdf34a7ce9b000e2c8f48005d1fc9c62c9707b4c1b24664"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.646024 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" event={"ID":"675c0c62-9109-4128-93c9-801f66debbaf","Type":"ContainerStarted","Data":"4a5080b5ab42d8ed8eac7922688c894f81dacbc92c9b239c883fbd1a0b770098"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.653834 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" event={"ID":"e046011a-da96-4097-bf84-c73160147343","Type":"ContainerStarted","Data":"ec2c4e80b13983bf30a81d3f92f5114b2de5f07c782f12b98b58dc65bd5d431c"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.668249 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" podStartSLOduration=117.668228646 podStartE2EDuration="1m57.668228646s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:15.642112277 +0000 UTC m=+136.605142820" watchObservedRunningTime="2026-01-21 13:11:15.668228646 +0000 UTC m=+136.631259189"
Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.670943 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79" event={"ID":"a73eaf04-cbe4-4af5-b602-935b5a92850c","Type":"ContainerStarted","Data":"c797abd9f8efdb6533572bf65d8c48437d5eaac9300e0335076b08e1840d3c3a"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.671016 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79" event={"ID":"a73eaf04-cbe4-4af5-b602-935b5a92850c","Type":"ContainerStarted","Data":"cd79a1ee1309e5d7be4b51cb54c115960cdfad03e6db6e3197cc94b0e800cde5"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.679674 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.684517 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.182079812 +0000 UTC m=+137.145110355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.686503 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" event={"ID":"40cbf264-85ae-42cb-bdbc-3548603501bf","Type":"ContainerStarted","Data":"320112df2bf77b5dc2cb9fd90c243b48c8f838cde69fef55bf114287fce7863e"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.712542 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" event={"ID":"1f01767f-8e58-40cf-a88d-91ffea7c6b4a","Type":"ContainerStarted","Data":"b3d625bc9dbcc20cc2c4f0667f29b405823a85b8502fd2c5227be843cb5d0e74"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.713179 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.781999 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.782260 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.282224873 +0000 UTC m=+137.245255416 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.782589 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.783599 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.28357828 +0000 UTC m=+137.246608823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.795623 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" podStartSLOduration=118.795602806 podStartE2EDuration="1m58.795602806s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:15.753076091 +0000 UTC m=+136.716106634" watchObservedRunningTime="2026-01-21 13:11:15.795602806 +0000 UTC m=+136.758633349" Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.797279 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rgnqt" podStartSLOduration=118.797274192 podStartE2EDuration="1m58.797274192s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:15.795259977 +0000 UTC m=+136.758290510" watchObservedRunningTime="2026-01-21 13:11:15.797274192 +0000 UTC m=+136.760304735" Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.835980 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pfbvr" podStartSLOduration=117.835952832 podStartE2EDuration="1m57.835952832s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:15.833849725 +0000 UTC m=+136.796880268" watchObservedRunningTime="2026-01-21 13:11:15.835952832 +0000 UTC m=+136.798983375"
Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.861020 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7pb5k" podStartSLOduration=118.860997132 podStartE2EDuration="1m58.860997132s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:15.860047607 +0000 UTC m=+136.823078150" watchObservedRunningTime="2026-01-21 13:11:15.860997132 +0000 UTC m=+136.824027675" Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.869287 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v47ps" event={"ID":"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0","Type":"ContainerStarted","Data":"1bbddb8e1e5d29867e081045559be8230b918e35c7f367feac64df7bdda44324"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.869327 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v47ps" event={"ID":"c5747e09-aac8-4ccc-b90f-2f8f61baa8e0","Type":"ContainerStarted","Data":"5051783a5aa31bdf17e0f0f88881207c7b2fbe061c34ca2f24ba03f512b9ffdf"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.883647 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.886154 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.386127235 +0000 UTC m=+137.349157768 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.895855 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rvj6z" podStartSLOduration=118.895814108 podStartE2EDuration="1m58.895814108s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:15.886774013 +0000 UTC m=+136.849804556" watchObservedRunningTime="2026-01-21 13:11:15.895814108 +0000 UTC m=+136.858844651" Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.903331 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" event={"ID":"a3221620-4989-4bff-8cfc-19da6a21a2da","Type":"ContainerStarted","Data":"181ad4258fb81e2c46b75a5448a9838f792b353d4faa15e6beb70bc23c0427fe"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.930669 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tlw47" podStartSLOduration=117.930630904 podStartE2EDuration="1m57.930630904s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:15.917928309 +0000 UTC m=+136.880958852" watchObservedRunningTime="2026-01-21 13:11:15.930630904 +0000 UTC m=+136.893661447" Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.935974 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.974892 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hn2rm" event={"ID":"f817e91b-c6c9-4fa8-b73f-743cf9ed97b3","Type":"ContainerStarted","Data":"7c2bc2406c91f20515fb27d5a350e3d71fc3171575bcbc7dfb24debaaf85f5fc"} Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.980429 4959 patch_prober.go:28] interesting pod/downloads-7954f5f757-7pb5k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.980524 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7pb5k" podUID="a8697566-9d27-4d19-be54-2c5307ab5962" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Jan 21 13:11:15 crc kubenswrapper[4959]: I0121 13:11:15.991016 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:15 crc kubenswrapper[4959]: E0121 13:11:15.993745 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.493729328 +0000 UTC m=+137.456759871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.014720 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.015794 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hvsqs" podStartSLOduration=118.015779847 podStartE2EDuration="1m58.015779847s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:15.97207255 +0000 UTC m=+136.935103093" watchObservedRunningTime="2026-01-21 13:11:16.015779847 +0000 UTC m=+136.978810390" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.024201 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5qxvm" podStartSLOduration=119.024170985 podStartE2EDuration="1m59.024170985s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:16.021659237 +0000 UTC m=+136.984689780" watchObservedRunningTime="2026-01-21 13:11:16.024170985 +0000 UTC m=+136.987201518" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.063367 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" podStartSLOduration=119.063343699 podStartE2EDuration="1m59.063343699s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:16.057617453 +0000 UTC m=+137.020647996" watchObservedRunningTime="2026-01-21 13:11:16.063343699 +0000 UTC m=+137.026374232" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.109140 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.115536 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.115988 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.615927837 +0000 UTC m=+137.578958390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.117344 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.123340 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2xdfx"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.163564 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"] Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.166940 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.666918742 +0000 UTC m=+137.629949285 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.167939 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.169990 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n7jtw" podStartSLOduration=118.169970145 podStartE2EDuration="1m58.169970145s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:16.168040713 +0000 UTC m=+137.131071256" watchObservedRunningTime="2026-01-21 13:11:16.169970145 +0000 UTC m=+137.133000688" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.218780 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hn2rm" podStartSLOduration=118.21875608 podStartE2EDuration="1m58.21875608s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:16.198907161 +0000 UTC m=+137.161937704" watchObservedRunningTime="2026-01-21 13:11:16.21875608 +0000 UTC m=+137.181786623" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.218960 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.219363 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.719338336 +0000 UTC m=+137.682368879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.226779 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-v47ps" podStartSLOduration=6.226749387 podStartE2EDuration="6.226749387s" podCreationTimestamp="2026-01-21 13:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:16.219410658 +0000 UTC m=+137.182441201" watchObservedRunningTime="2026-01-21 13:11:16.226749387 +0000 UTC m=+137.189779930" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.323960 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.324637 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:16.824623626 +0000 UTC m=+137.787654169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: W0121 13:11:16.369531 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f8d1e57_4cb7_4854_99b7_5e69f89d2dd7.slice/crio-a87ea988e2cdc5952036ff53c910e07d00d2eaf3b8802ab76e5aaeb7dfb87253 WatchSource:0}: Error finding container a87ea988e2cdc5952036ff53c910e07d00d2eaf3b8802ab76e5aaeb7dfb87253: Status 404 returned error can't find the container with id a87ea988e2cdc5952036ff53c910e07d00d2eaf3b8802ab76e5aaeb7dfb87253 Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.425764 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.426311 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 13:11:16.926284117 +0000 UTC m=+137.889314670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.531053 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.531963 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.031944317 +0000 UTC m=+137.994974860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.640215 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.641058 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.14102261 +0000 UTC m=+138.104053153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.704141 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.742639 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.743064 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.243040281 +0000 UTC m=+138.206070824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.752784 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xgxkj"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.760751 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.775443 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:16 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:16 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:16 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.775536 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.786632 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.844002 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.844534 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.344496817 +0000 UTC m=+138.307527360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.890218 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.899349 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b7ws8"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.917763 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.928679 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb"] Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.945640 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:16 crc kubenswrapper[4959]: E0121 13:11:16.945961 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.445947513 +0000 UTC m=+138.408978056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:16 crc kubenswrapper[4959]: I0121 13:11:16.947321 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j4l2m"] Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.027061 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" event={"ID":"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb","Type":"ContainerStarted","Data":"228b8dc7269f67c1f5c915db6ccc26e6661b29addfc408e6aca3b907ddbbe3f5"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.030709 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" event={"ID":"7947311e-2d41-4e52-8b62-e27b635a889a","Type":"ContainerStarted","Data":"06a5fbfe39c33e9a090e2596183007f30f83d3abe88ea5dab3cdba2086388ece"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.031912 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" event={"ID":"1267c86a-bd57-4042-b853-47e17f96d636","Type":"ContainerStarted","Data":"76a2fbd6bb4f8691b987f0e5af761a3f7785c7d751576bd89a9a3ea926c47fdb"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.032872 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.033925 4959 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nc7g5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.033981 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" podUID="1267c86a-bd57-4042-b853-47e17f96d636" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.039332 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" event={"ID":"675c0c62-9109-4128-93c9-801f66debbaf","Type":"ContainerStarted","Data":"afa0ce1b89b63f6aaf3f543abb5c988ab62283b0c263fb3ef39a28901d74ef3a"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.040022 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.042490 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" event={"ID":"dacf6074-2418-407e-a3a3-db84f33e1147","Type":"ContainerStarted","Data":"aeab308d96b83af62d4137ea35716dd5ab15443b2b1c4f8713b8256ac0b72643"}
Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.046678 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.047057 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.547035429 +0000 UTC m=+138.510065972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.059932 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" event={"ID":"40cbf264-85ae-42cb-bdbc-3548603501bf","Type":"ContainerStarted","Data":"20c6f05fc62340f44d18623c44c77df25df178af1d33382671cb148e2805e9fe"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.060396 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" podStartSLOduration=119.060374461 podStartE2EDuration="1m59.060374461s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:17.059063246 +0000 UTC m=+138.022093789" watchObservedRunningTime="2026-01-21 13:11:17.060374461 +0000 UTC m=+138.023405004" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.061828 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" event={"ID":"f5c68213-b071-467f-9243-20d2c99b520c","Type":"ContainerStarted","Data":"2aa3b33cb0e869960c7d993379af3d2e3217971c83f6b11aaec8abc8d6093e5f"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.062671 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" event={"ID":"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7","Type":"ContainerStarted","Data":"a87ea988e2cdc5952036ff53c910e07d00d2eaf3b8802ab76e5aaeb7dfb87253"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.074707 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2xdfx" event={"ID":"929ba030-0142-47b2-81c8-e82ed1d7227b","Type":"ContainerStarted","Data":"ecc90722f756001ef111ce84aa5ae930b3b27b61aec1abaf83ce9d5de9c6f067"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.088822 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" podStartSLOduration=119.088800423 podStartE2EDuration="1m59.088800423s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:17.087476547 +0000 UTC m=+138.050507090" watchObservedRunningTime="2026-01-21 13:11:17.088800423 +0000 UTC m=+138.051830966" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.089350 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" event={"ID":"405c903d-189e-4c88-9dbd-01f12df65580","Type":"ContainerStarted","Data":"201ccb4486d69c81640855771daed835fe3763858b122edce3b569ea406e0c8f"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.118151 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66wck" podStartSLOduration=119.11812804 podStartE2EDuration="1m59.11812804s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:17.115633792 +0000 UTC m=+138.078664355" watchObservedRunningTime="2026-01-21 13:11:17.11812804 +0000 UTC m=+138.081158583" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.135536 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd"] Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.137523 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79" event={"ID":"a73eaf04-cbe4-4af5-b602-935b5a92850c","Type":"ContainerStarted","Data":"1d7c1938161fc6f7400ccd3276954ae0f79cd9df6f5eb57c721aca3152dd77fb"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.161476 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"] Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.171473 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.172128 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hn87t" podStartSLOduration=120.172079305 podStartE2EDuration="2m0.172079305s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:17.157916361 +0000 UTC m=+138.120946904" watchObservedRunningTime="2026-01-21 13:11:17.172079305 +0000 UTC m=+138.135109848" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.174523 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" event={"ID":"7a29e505-0841-4b13-9f9b-3ad6984bc580","Type":"ContainerStarted","Data":"e2b5fbc29b05d5f15094b898d88b274db79289b85376a8fc98b5804f7f32f45a"} Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.175663 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 13:11:17.675638862 +0000 UTC m=+138.638669415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.178713 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h5nps"] Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.209760 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2"] Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.223349 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" event={"ID":"2d34f545-b950-49af-9300-d1eb2a1495eb","Type":"ContainerStarted","Data":"a79e30aeddb0aaeca7076376e7ba241051c6700699d0f8f5fe52b02f830ca8d9"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.261312 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" event={"ID":"e046011a-da96-4097-bf84-c73160147343","Type":"ContainerStarted","Data":"334b9b4612c96ae2f0a835076f70693b954284e9e51a96f795aaa45b0c78d9e6"} Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.261837 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5jj79" podStartSLOduration=119.261800782 podStartE2EDuration="1m59.261800782s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:17.249982561 +0000 UTC m=+138.213013114" watchObservedRunningTime="2026-01-21 13:11:17.261800782 +0000 UTC m=+138.224831325" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.278055 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pm5c8"] Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.278625 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.279552 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" event={"ID":"e4516173-5c5e-465b-a405-4d6d4fe5454b","Type":"ContainerStarted","Data":"9632e1036e693722d2c176bc48335ef71290bd5b88aca85f34d296ce344f715b"} Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.279947 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.779920395 +0000 UTC m=+138.742950938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.284219 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw"] Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.300061 4959 patch_prober.go:28] interesting pod/downloads-7954f5f757-7pb5k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.310279 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7pb5k" podUID="a8697566-9d27-4d19-be54-2c5307ab5962" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.379979 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.383826 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.883810207 +0000 UTC m=+138.846840750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.481961 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.482773 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:17.982757675 +0000 UTC m=+138.945788218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.584085 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.584420 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.084405896 +0000 UTC m=+139.047436429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.684721 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.684943 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.184913146 +0000 UTC m=+139.147943689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.685066 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.685596 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.185588824 +0000 UTC m=+139.148619357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.753393 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.768571 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:17 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:17 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:17 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.768631 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.789195 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r42bn" podStartSLOduration=119.789174598 podStartE2EDuration="1m59.789174598s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:17.315585804 +0000 UTC m=+138.278616347" watchObservedRunningTime="2026-01-21 13:11:17.789174598 +0000 UTC m=+138.752205141" Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.790549 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.790993 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.290975817 +0000 UTC m=+139.254006360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:17 crc kubenswrapper[4959]: I0121 13:11:17.891724 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:17 crc kubenswrapper[4959]: E0121 13:11:17.892079 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.392067012 +0000 UTC m=+139.355097555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.001588 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.002050 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.502034129 +0000 UTC m=+139.465064672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.104599 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.105175 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.605157131 +0000 UTC m=+139.568187674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.205810 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.206219 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.706195615 +0000 UTC m=+139.669226158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.285075 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfc9p" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.308819 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.309149 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:18.8091337 +0000 UTC m=+139.772164243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.380672 4959 generic.go:334] "Generic (PLEG): container finished" podID="7947311e-2d41-4e52-8b62-e27b635a889a" containerID="6fae212a096a0abdb0db2d6f5e87f3044401adbcaaf9c1d71a4130190c9652cd" exitCode=0 Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.381681 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" event={"ID":"7947311e-2d41-4e52-8b62-e27b635a889a","Type":"ContainerDied","Data":"6fae212a096a0abdb0db2d6f5e87f3044401adbcaaf9c1d71a4130190c9652cd"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.402550 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" event={"ID":"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3","Type":"ContainerStarted","Data":"8162505a3e4b660c5c3c9d7a26d304960a5661b122ea87ace6e379bb1288283c"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.409235 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.409675 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 13:11:18.9096568 +0000 UTC m=+139.872687343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.423995 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" event={"ID":"e4516173-5c5e-465b-a405-4d6d4fe5454b","Type":"ContainerStarted","Data":"dc6f79824183e17347127f05c646d180c1fb068b5402858f0e249958e720088a"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.425117 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.425324 4959 csr.go:261] certificate signing request csr-2xlqh is approved, waiting to be issued Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.465351 4959 csr.go:257] certificate signing request csr-2xlqh is issued Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.513561 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.516293 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.016275677 +0000 UTC m=+139.979306210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.538410 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5nps" event={"ID":"b93bf160-39a1-43b0-a409-59b814a14258","Type":"ContainerStarted","Data":"6cc52a0026bcbd651658325b948c0efd3a0fdb63a65c7eb064320977660e0130"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.606892 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" event={"ID":"ab04d280-8b58-44e7-a789-f706b8c5f807","Type":"ContainerStarted","Data":"021f2ec969185f1f287f423330c4925953e7ff57f06fe920de426b108ad01138"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.607289 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" event={"ID":"ab04d280-8b58-44e7-a789-f706b8c5f807","Type":"ContainerStarted","Data":"df3af7a47a4834cce98f7e316d2c29477f3aba74507040c7f7d081efcdd8d572"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.616165 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.616627 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.116610062 +0000 UTC m=+140.079640605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.671013 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" event={"ID":"5aaa49b6-0304-4205-85d4-3f23a10d25ad","Type":"ContainerStarted","Data":"35b546a4985a2643e411a21bd0685739b1973b1f5d9a7d151cbc3f83077aae41"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.671079 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" event={"ID":"5aaa49b6-0304-4205-85d4-3f23a10d25ad","Type":"ContainerStarted","Data":"dc37a942a036c7997b9376d309cabde011337354684540132485258bede97057"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.673802 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" podStartSLOduration=120.673784925 podStartE2EDuration="2m0.673784925s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.575508775 +0000 UTC m=+139.538539318" watchObservedRunningTime="2026-01-21 13:11:18.673784925 +0000 UTC m=+139.636815468" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.673930 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" podStartSLOduration=120.673924959 podStartE2EDuration="2m0.673924959s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.671499763 +0000 UTC m=+139.634530316" watchObservedRunningTime="2026-01-21 13:11:18.673924959 +0000 UTC m=+139.636955512" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.681654 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" event={"ID":"e6a4be35-59c0-429a-a499-66cb3dc85aa5","Type":"ContainerStarted","Data":"6784c968c8ae88ffccbcf703e3081b5e53af1767b2977813d8b51d2add6a33e9"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.685649 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" event={"ID":"35c5fb02-af08-481e-a141-649021a5df80","Type":"ContainerStarted","Data":"52249d002745e91bf3d0e9546edcbf29d45ca11db2c3b5c399d24e9378832b87"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.692322 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" event={"ID":"130ce609-755d-4564-8c8a-3b9038e201bc","Type":"ContainerStarted","Data":"79e24d6b7228877841bc16a0253953d7e345de2a9d533f1aa61d2c7e15ad4b4a"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.699238 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2" 
event={"ID":"feec37c0-15ae-4bcf-af2c-1c1622f0edd4","Type":"ContainerStarted","Data":"0d42a5033cee95940b444a54cf3b37c77cda078bf39a89a38aa3c7de336ef4da"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.699292 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2" event={"ID":"feec37c0-15ae-4bcf-af2c-1c1622f0edd4","Type":"ContainerStarted","Data":"a53d77978125db5136ebd09e66c91f70ef08802ba1ca1be8019f0ae810e3bb92"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.717936 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.719070 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.219055055 +0000 UTC m=+140.182085598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.740887 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" event={"ID":"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb","Type":"ContainerStarted","Data":"1bcb24eb4294b135dc425adf3e17e57e7033a4322754e44501b625ce697d3bf5"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.742260 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.744670 4959 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tr76d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.744741 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.751939 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8hcw2" podStartSLOduration=120.751918267 podStartE2EDuration="2m0.751918267s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.745518714 +0000 
UTC m=+139.708549257" watchObservedRunningTime="2026-01-21 13:11:18.751918267 +0000 UTC m=+139.714948810" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.752824 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" podStartSLOduration=120.752816002 podStartE2EDuration="2m0.752816002s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.705009933 +0000 UTC m=+139.668040476" watchObservedRunningTime="2026-01-21 13:11:18.752816002 +0000 UTC m=+139.715846545" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.753919 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" event={"ID":"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d","Type":"ContainerStarted","Data":"9acc9a0d4461c5706c3746e17e66bbf68e17fba48a148bd7665dbb4fdc0d1caa"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.753953 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" event={"ID":"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d","Type":"ContainerStarted","Data":"c3f9cd20369436c6f4f8fdeaa7859006153adb85c4b79eafd32b34cf8c8d6811"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.768982 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" event={"ID":"e514cd18-c4ea-4758-8023-08dfdbc87717","Type":"ContainerStarted","Data":"2ed08d846f0f9d621937bb98fa6d66638c07437ecb70fb8038f72111fce36776"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.769043 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" event={"ID":"e514cd18-c4ea-4758-8023-08dfdbc87717","Type":"ContainerStarted","Data":"4aaabad90c18f1be3e2a73831f40e65bfcbf46e1ff705c15d169a6c66739b8f8"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.773374 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:18 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:18 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:18 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.773426 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.779163 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" event={"ID":"2d34f545-b950-49af-9300-d1eb2a1495eb","Type":"ContainerStarted","Data":"9f63b84ad4aff3052dfd531b642eada17d6fbeadb2db0d0f7872e61d6ea4c74c"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.779879 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" podStartSLOduration=120.779841016 
podStartE2EDuration="2m0.779841016s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.778088588 +0000 UTC m=+139.741119131" watchObservedRunningTime="2026-01-21 13:11:18.779841016 +0000 UTC m=+139.742871559" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.784887 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" event={"ID":"6f8d1e57-4cb7-4854-99b7-5e69f89d2dd7","Type":"ContainerStarted","Data":"6eeeb74868bb558af0b00112a39f50131e7713d147c0bce624d031e64c48269f"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.829720 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.830980 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.330963725 +0000 UTC m=+140.293994268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.851497 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" event={"ID":"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9","Type":"ContainerStarted","Data":"2bb476994d48f468665c1ed18109e259cb352ea95724663db03e614e029d466e"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.851559 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" event={"ID":"49f2207f-8a7d-4f0b-8dfd-91f4a529e2c9","Type":"ContainerStarted","Data":"2c6ae175bc54a237f90c7fdc66e40433d287c86df82f5aca7496380fa827b061"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.869284 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j4l2m" event={"ID":"6153e9bd-6355-4d5c-9acc-204180e45789","Type":"ContainerStarted","Data":"0a1bec5809d0470c86f6d8722763723d1882587453f8a3907298f19d93090d8a"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.889250 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" event={"ID":"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08","Type":"ContainerStarted","Data":"b07912beabadb935dc447be6ec2d11568a34233593e2e45cab5dcfd5654b7edd"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.889309 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" event={"ID":"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08","Type":"ContainerStarted","Data":"800d1114a5859194ede04058161d85851dc694f13eb9b6c81333ae3843a3a339"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.896397 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-glxdb" podStartSLOduration=120.895902658 podStartE2EDuration="2m0.895902658s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.894968193 +0000 UTC m=+139.857998736" watchObservedRunningTime="2026-01-21 13:11:18.895902658 +0000 UTC m=+139.858933201" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.896518 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6jnfq" podStartSLOduration=120.896513785 podStartE2EDuration="2m0.896513785s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.817660983 +0000 UTC m=+139.780691526" watchObservedRunningTime="2026-01-21 13:11:18.896513785 +0000 UTC m=+139.859544328" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.903685 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" event={"ID":"dacf6074-2418-407e-a3a3-db84f33e1147","Type":"ContainerStarted","Data":"e2ad420e4b3d431485e42749d4ac32e273303a26504d5e3ebbc7e7be2d68cdd7"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.936329 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" event={"ID":"f5c68213-b071-467f-9243-20d2c99b520c","Type":"ContainerStarted","Data":"b5b117f12988ead7364b971334547fafd5902e189f8e4b95550990ee26bec3a3"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.937145 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.938068 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:18 crc kubenswrapper[4959]: E0121 13:11:18.938425 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.438412643 +0000 UTC m=+140.401443186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.938424 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fd49l" podStartSLOduration=120.938398433 podStartE2EDuration="2m0.938398433s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.936436849 +0000 UTC m=+139.899467392" watchObservedRunningTime="2026-01-21 13:11:18.938398433 +0000 UTC m=+139.901428986" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.952452 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2xdfx" event={"ID":"929ba030-0142-47b2-81c8-e82ed1d7227b","Type":"ContainerStarted","Data":"8f82986c352f354649affed1f840182311552df124f3f91d1bd1014bddb4bc9f"} Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.964638 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.966565 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-szs96" podStartSLOduration=120.966552017 podStartE2EDuration="2m0.966552017s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:18.964996555 +0000 UTC m=+139.928027118" watchObservedRunningTime="2026-01-21 13:11:18.966552017 +0000 UTC m=+139.929582560" Jan 21 13:11:18 crc kubenswrapper[4959]: I0121 13:11:18.968539 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nc7g5" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.037702 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" podStartSLOduration=121.037682529 podStartE2EDuration="2m1.037682529s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:19.01449997 +0000 UTC m=+139.977530543" watchObservedRunningTime="2026-01-21 13:11:19.037682529 +0000 UTC m=+140.000713072" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.039529 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2xdfx" podStartSLOduration=9.039519329 podStartE2EDuration="9.039519329s" podCreationTimestamp="2026-01-21 13:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:19.034204155 +0000 UTC m=+139.997234728" watchObservedRunningTime="2026-01-21 13:11:19.039519329 
+0000 UTC m=+140.002549872" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.040447 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.042212 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.542186192 +0000 UTC m=+140.505216735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.142988 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.143438 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.643423712 +0000 UTC m=+140.606454255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.246361 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.246758 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.746739098 +0000 UTC m=+140.709769641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.348087 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.348554 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.848538343 +0000 UTC m=+140.811568886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.427388 4959 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2jwjg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.427461 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" podUID="e4516173-5c5e-465b-a405-4d6d4fe5454b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.449175 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.449938 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:19.949918437 +0000 UTC m=+140.912948970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.468615 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 13:06:18 +0000 UTC, rotation deadline is 2026-11-05 01:41:22.047845553 +0000 UTC Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.468663 4959 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6900h30m2.579186727s for next certificate rotation Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.553364 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.553797 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.053783148 +0000 UTC m=+141.016813691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.655247 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.655420 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.155389358 +0000 UTC m=+141.118419911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.655679 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.656143 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.156132478 +0000 UTC m=+141.119163021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.723679 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tv8w6"] Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.724584 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.732526 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.744541 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tv8w6"] Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.757250 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.758255 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.258232112 +0000 UTC m=+141.221262645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.795842 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:19 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:19 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:19 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.795904 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.859694 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.859803 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vpg4\" (UniqueName: \"kubernetes.io/projected/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-kube-api-access-5vpg4\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.859872 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-catalog-content\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.859900 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-utilities\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.860344 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.360330505 +0000 UTC m=+141.323361048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.922067 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x66q8"] Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.923154 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:19 crc kubenswrapper[4959]: W0121 13:11:19.926431 4959 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.926507 4959 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.952171 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x66q8"] Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.961012 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.961402 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vpg4\" (UniqueName: \"kubernetes.io/projected/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-kube-api-access-5vpg4\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.961469 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-catalog-content\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.961495 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-utilities\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 
13:11:19.962041 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-utilities\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: E0121 13:11:19.962161 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.46213156 +0000 UTC m=+141.425162103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.962410 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-catalog-content\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.968522 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" event={"ID":"fc96c3c8-b3c2-4f54-bc7c-3c2bb8822bc3","Type":"ContainerStarted","Data":"b9c55571781ae6535914175bb1211006b58b7bf6b7b3019783faa2b2939b3c6b"} Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.970535 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" event={"ID":"c3483ce2-ed4e-42a8-b32e-7d4a41e3cf4d","Type":"ContainerStarted","Data":"f5abd1beea1c53170d2698d1dcd2c41e32708586ce3ee59a45cd22fde028f176"} Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.980973 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j4l2m" event={"ID":"6153e9bd-6355-4d5c-9acc-204180e45789","Type":"ContainerStarted","Data":"83e1b58b27e383f868ef3f8a06c2e3b91b76391db63df2a03c5d116c8ac08c61"} Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.986130 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" event={"ID":"cc5e4d1b-a4fe-40af-b0c1-e45ea5d73c08","Type":"ContainerStarted","Data":"cca817d0b89a5c20ad27bb695b253ab27388de91125673c72da6ff3e333f0c7a"} Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.986479 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" Jan 21 13:11:19 crc kubenswrapper[4959]: I0121 13:11:19.988547 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" event={"ID":"130ce609-755d-4564-8c8a-3b9038e201bc","Type":"ContainerStarted","Data":"4d42bda02f7d6a1bb7f35b45a689f723b70de3cc4cc841c284292f8f4d5508c1"} Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.001615 4959 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gr22l" podStartSLOduration=123.001599312 podStartE2EDuration="2m3.001599312s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:20.000659687 +0000 UTC m=+140.963690250" watchObservedRunningTime="2026-01-21 13:11:20.001599312 +0000 UTC m=+140.964629855" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.006483 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" event={"ID":"5aaa49b6-0304-4205-85d4-3f23a10d25ad","Type":"ContainerStarted","Data":"06e463a39d92b6b9320ecdd41723ecc06caa154e39550bcf94fa1558615190c7"} Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.013382 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vpg4\" (UniqueName: \"kubernetes.io/projected/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-kube-api-access-5vpg4\") pod \"certified-operators-tv8w6\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.023533 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" event={"ID":"7947311e-2d41-4e52-8b62-e27b635a889a","Type":"ContainerStarted","Data":"ff23cb7722089c477c9df237de5cf1674eaa55b8a5fa3c7a5be493a436f3d558"} Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.035694 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njqtd" podStartSLOduration=122.035677388 podStartE2EDuration="2m2.035677388s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:20.031584647 +0000 UTC m=+140.994615190" watchObservedRunningTime="2026-01-21 13:11:20.035677388 +0000 UTC m=+140.998707931" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.044254 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" event={"ID":"2d34f545-b950-49af-9300-d1eb2a1495eb","Type":"ContainerStarted","Data":"f7fad054a558d86273513903198a3aee0de04b1ffbdf6f5e8773ec3564a3190a"} Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.062678 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-utilities\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.063022 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.063054 4959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-catalog-content\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.063077 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrbkb\" (UniqueName: \"kubernetes.io/projected/c01bf13b-8ada-46de-a969-cb5691c8d1c0-kube-api-access-xrbkb\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.064368 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.564349447 +0000 UTC m=+141.527380070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.081256 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" podStartSLOduration=122.081222235 podStartE2EDuration="2m2.081222235s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:20.076739163 +0000 UTC m=+141.039769716" watchObservedRunningTime="2026-01-21 13:11:20.081222235 +0000 UTC m=+141.044252778" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.082568 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dvd7" event={"ID":"35c5fb02-af08-481e-a141-649021a5df80","Type":"ContainerStarted","Data":"aaa21a70a70c974e04067c8ca32195ea441be9973966b7b2a835dff1e920e576"} Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.107597 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pm5c8" podStartSLOduration=122.107576611 podStartE2EDuration="2m2.107576611s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:20.101323381 +0000 UTC m=+141.064353924" watchObservedRunningTime="2026-01-21 13:11:20.107576611 +0000 UTC m=+141.070607154" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.110598 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.119151 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5nps" event={"ID":"b93bf160-39a1-43b0-a409-59b814a14258","Type":"ContainerStarted","Data":"28b7240243980e61c61f59a5c3252873413d9cfe488efd2460b551c84b25f17e"} Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.123309 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r59pr"] Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.127954 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.165453 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" event={"ID":"e514cd18-c4ea-4758-8023-08dfdbc87717","Type":"ContainerStarted","Data":"c9e5074a534dc63f0a18762433124d02e0c4d519cc07b0f11a9d5957468bc1f3"} Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.169361 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b4wgb" podStartSLOduration=122.169334929 podStartE2EDuration="2m2.169334929s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:20.165053462 +0000 UTC m=+141.128084015" watchObservedRunningTime="2026-01-21 13:11:20.169334929 +0000 UTC m=+141.132365472" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.170982 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r59pr"] Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.181169 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.181436 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-utilities\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.181468 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-catalog-content\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.181565 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-catalog-content\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.181586 
4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrbkb\" (UniqueName: \"kubernetes.io/projected/c01bf13b-8ada-46de-a969-cb5691c8d1c0-kube-api-access-xrbkb\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.181605 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2cf\" (UniqueName: \"kubernetes.io/projected/aea8e71a-36ca-4b96-8599-18a0b725e373-kube-api-access-pl2cf\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.181718 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-utilities\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.181822 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.681804777 +0000 UTC m=+141.644835320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.183272 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-utilities\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.184714 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-catalog-content\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.203046 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" event={"ID":"e6a4be35-59c0-429a-a499-66cb3dc85aa5","Type":"ContainerStarted","Data":"94b04fc6d416e4be09be86286ccc20a13c7a01c9880c30ebd4da5926b3720aa8"} Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.214735 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" podStartSLOduration=122.214710601 podStartE2EDuration="2m2.214710601s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 13:11:20.206241701 +0000 UTC m=+141.169272244" watchObservedRunningTime="2026-01-21 13:11:20.214710601 +0000 UTC m=+141.177741144" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.215903 4959 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tr76d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.227393 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.251937 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrbkb\" (UniqueName: \"kubernetes.io/projected/c01bf13b-8ada-46de-a969-cb5691c8d1c0-kube-api-access-xrbkb\") pod \"community-operators-x66q8\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.284366 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-catalog-content\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.284533 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.284644 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2cf\" (UniqueName: \"kubernetes.io/projected/aea8e71a-36ca-4b96-8599-18a0b725e373-kube-api-access-pl2cf\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.285018 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-utilities\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.288042 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-catalog-content\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.292327 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-utilities\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.309633 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.809607189 +0000 UTC m=+141.772637732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.321339 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grjhd" podStartSLOduration=122.321312997 podStartE2EDuration="2m2.321312997s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:20.268473702 +0000 UTC m=+141.231504255" watchObservedRunningTime="2026-01-21 13:11:20.321312997 +0000 UTC m=+141.284343540" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.323141 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xgxkj" podStartSLOduration=122.323134016 podStartE2EDuration="2m2.323134016s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:20.320793943 +0000 UTC m=+141.283824486" watchObservedRunningTime="2026-01-21 13:11:20.323134016 +0000 UTC m=+141.286164559" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.335178 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2cf\" (UniqueName: \"kubernetes.io/projected/aea8e71a-36ca-4b96-8599-18a0b725e373-kube-api-access-pl2cf\") pod \"certified-operators-r59pr\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.348710 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vf8h2"] Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.350077 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.389053 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.389854 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-catalog-content\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.389888 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2bhl\" (UniqueName: \"kubernetes.io/projected/27a81382-d955-4658-8b95-0bbbaf788ecf-kube-api-access-g2bhl\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.389926 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-utilities\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.390170 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.890146956 +0000 UTC m=+141.853177499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.436167 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vf8h2"] Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.490924 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-catalog-content\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.490978 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2bhl\" (UniqueName: \"kubernetes.io/projected/27a81382-d955-4658-8b95-0bbbaf788ecf-kube-api-access-g2bhl\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.491004 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-utilities\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.491045 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.491353 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:20.991342095 +0000 UTC m=+141.954372638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.491862 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-catalog-content\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.492569 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-utilities\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.504112 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.513747 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2bhl\" (UniqueName: \"kubernetes.io/projected/27a81382-d955-4658-8b95-0bbbaf788ecf-kube-api-access-g2bhl\") pod \"community-operators-vf8h2\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.536001 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tv8w6"] Jan 21 13:11:20 crc kubenswrapper[4959]: W0121 13:11:20.540870 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea3b719b_be6f_4a11_a13c_ba1bfca953a7.slice/crio-6653cab18475ee956ce9996bd27de8711a19eb16eecd39a28ba6adbe5253d337 WatchSource:0}: Error finding container 6653cab18475ee956ce9996bd27de8711a19eb16eecd39a28ba6adbe5253d337: Status 404 returned error can't find the container with id 6653cab18475ee956ce9996bd27de8711a19eb16eecd39a28ba6adbe5253d337 Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.592768 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.592978 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.092940005 +0000 UTC m=+142.055970548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.593144 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.593549 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.093541161 +0000 UTC m=+142.056571704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.694871 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.695460 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.195443229 +0000 UTC m=+142.158473772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.719927 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r59pr"] Jan 21 13:11:20 crc kubenswrapper[4959]: W0121 13:11:20.728729 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaea8e71a_36ca_4b96_8599_18a0b725e373.slice/crio-585a200515c12f1a581d3b5ab5c691361e944aa486d8c3ff7c77aba29fc30928 WatchSource:0}: Error finding container 585a200515c12f1a581d3b5ab5c691361e944aa486d8c3ff7c77aba29fc30928: Status 404 returned error can't find the container with id 585a200515c12f1a581d3b5ab5c691361e944aa486d8c3ff7c77aba29fc30928 Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.765281 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:20 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:20 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:20 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.765350 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.798528 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.799213 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.299186977 +0000 UTC m=+142.262217720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.807726 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.813084 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vf8h2"
Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.817688 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x66q8"
Jan 21 13:11:20 crc kubenswrapper[4959]: I0121 13:11:20.900874 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:20 crc kubenswrapper[4959]: E0121 13:11:20.904633 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.404602521 +0000 UTC m=+142.367633084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.002655 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.003726 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.503708673 +0000 UTC m=+142.466739226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.063474 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vf8h2"]
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.106265 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.106642 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.606627248 +0000 UTC m=+142.569657791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.122759 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x66q8"]
Jan 21 13:11:21 crc kubenswrapper[4959]: W0121 13:11:21.130629 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc01bf13b_8ada_46de_a969_cb5691c8d1c0.slice/crio-6c6b80fe0639faca2a7e90675cce7055edbcc1539397338db1c91e7e773a2e62 WatchSource:0}: Error finding container 6c6b80fe0639faca2a7e90675cce7055edbcc1539397338db1c91e7e773a2e62: Status 404 returned error can't find the container with id 6c6b80fe0639faca2a7e90675cce7055edbcc1539397338db1c91e7e773a2e62
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.205523 4959 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2jwjg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.205576 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg" podUID="e4516173-5c5e-465b-a405-4d6d4fe5454b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.207378 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.207956 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.7079224 +0000 UTC m=+142.670952993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.208107 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x66q8" event={"ID":"c01bf13b-8ada-46de-a969-cb5691c8d1c0","Type":"ContainerStarted","Data":"6c6b80fe0639faca2a7e90675cce7055edbcc1539397338db1c91e7e773a2e62"}
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.210650 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv8w6" event={"ID":"ea3b719b-be6f-4a11-a13c-ba1bfca953a7","Type":"ContainerStarted","Data":"6653cab18475ee956ce9996bd27de8711a19eb16eecd39a28ba6adbe5253d337"}
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.211777 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf8h2" event={"ID":"27a81382-d955-4658-8b95-0bbbaf788ecf","Type":"ContainerStarted","Data":"fbca5dddce6c8b43cb40765fa90aa08591ed5594a539e0b56d3ad048e080f20d"}
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.212849 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r59pr" event={"ID":"aea8e71a-36ca-4b96-8599-18a0b725e373","Type":"ContainerStarted","Data":"585a200515c12f1a581d3b5ab5c691361e944aa486d8c3ff7c77aba29fc30928"}
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.213646 4959 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tr76d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.213697 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.308487 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.308642 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.808621535 +0000 UTC m=+142.771652098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.310268 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.310603 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.810582398 +0000 UTC m=+142.773612941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.380003 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.380110 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.411831 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.412259 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.912218549 +0000 UTC m=+142.875249092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.412622 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.413011 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:21.91299074 +0000 UTC m=+142.876021283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.514324 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.514492 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.014466846 +0000 UTC m=+142.977497389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.514910 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.515355 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.01533928 +0000 UTC m=+142.978369823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.615758 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.616053 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.116026315 +0000 UTC m=+143.079056868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.617422 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.617727 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.117716601 +0000 UTC m=+143.080747144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.645441 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2jwjg"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.720633 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.721176 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.22115364 +0000 UTC m=+143.184184183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.721949 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qhqcb"]
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.723986 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.726773 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.750238 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhqcb"]
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.764804 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 13:11:21 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld
Jan 21 13:11:21 crc kubenswrapper[4959]: [+]process-running ok
Jan 21 13:11:21 crc kubenswrapper[4959]: healthz check failed
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.764891 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.822388 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.822460 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-utilities\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.822502 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtncs\" (UniqueName: \"kubernetes.io/projected/0220f7cc-761e-4995-aa56-6c543cd5a294-kube-api-access-qtncs\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.822522 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-catalog-content\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.822796 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.32277686 +0000 UTC m=+143.285807403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.923958 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.924569 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-utilities\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.924630 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtncs\" (UniqueName: \"kubernetes.io/projected/0220f7cc-761e-4995-aa56-6c543cd5a294-kube-api-access-qtncs\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.924652 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-catalog-content\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.925866 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-catalog-content\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:21 crc kubenswrapper[4959]: E0121 13:11:21.925964 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.425942482 +0000 UTC m=+143.388973025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.926197 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-utilities\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
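
[Annotation] The block of failures above is one race repeated: every MountVolume.MountDevice and UnmountVolume.TearDown attempt fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" because the hostpath CSI node plugin has not yet registered with the kubelet, and each attempt is re-queued with a fixed 500ms durationBeforeRetry. One way to watch registration state from outside the kubelet is to read the node's CSINode object, which mirrors the kubelet's registered-driver list. A minimal diagnostic sketch, assuming a reachable kubeconfig and taking the node name "crc" from the hostname in these entries (illustrative, not part of the log):

    // csinode-check: print the CSI drivers that have finished node
    // registration on "crc". A driver appears here only after the
    // kubelet's plugin-registration handshake has completed.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            fmt.Printf("registered: %s (nodeID %q)\n", d.Name, d.NodeID)
        }
    }

While kubevirt.io.hostpath-provisioner is absent from that list, the retries above are expected behavior rather than a mount bug.
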
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.928750 4959 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 21 13:11:21 crc kubenswrapper[4959]: I0121 13:11:21.972054 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtncs\" (UniqueName: \"kubernetes.io/projected/0220f7cc-761e-4995-aa56-6c543cd5a294-kube-api-access-qtncs\") pod \"redhat-marketplace-qhqcb\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.026009 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:22 crc kubenswrapper[4959]: E0121 13:11:22.026939 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.526909065 +0000 UTC m=+143.489939778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.042028 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhqcb"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.119845 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-svjvm"]
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.129283 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:22 crc kubenswrapper[4959]: E0121 13:11:22.129718 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.629672616 +0000 UTC m=+143.592703149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.129981 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:22 crc kubenswrapper[4959]: E0121 13:11:22.130626 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.630603641 +0000 UTC m=+143.593634184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.138625 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.164477 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svjvm"]
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.232571 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.232970 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2bh\" (UniqueName: \"kubernetes.io/projected/536f9813-b3c2-4be9-8e98-dcc68f2498a3-kube-api-access-qh2bh\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.233031 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-catalog-content\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.233167 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-utilities\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: E0121 13:11:22.233402 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.733370873 +0000 UTC m=+143.696401416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.234449 4959 generic.go:334] "Generic (PLEG): container finished" podID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerID="e42180c242e4e5f98347333721fc50fcda28ddeb860f7559fcd266cc7059c9c9" exitCode=0
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.235350 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv8w6" event={"ID":"ea3b719b-be6f-4a11-a13c-ba1bfca953a7","Type":"ContainerDied","Data":"e42180c242e4e5f98347333721fc50fcda28ddeb860f7559fcd266cc7059c9c9"}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.237539 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.238142 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.238950 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.244548 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.256198 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j4l2m" event={"ID":"6153e9bd-6355-4d5c-9acc-204180e45789","Type":"ContainerStarted","Data":"73212fa98490575d4d95c93f9ecdbbe27e9b658072dc505c9145c0c9e86122c8"}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.256242 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5qxvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.256257 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-j4l2m"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.253772 4959 patch_prober.go:28] interesting pod/console-f9d7485db-5qxvm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.256333 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5qxvm" podUID="277cb73f-7c9e-46e0-bb04-4baea31ec998" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.257585 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.298303 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5nps" event={"ID":"b93bf160-39a1-43b0-a409-59b814a14258","Type":"ContainerStarted","Data":"d5376a6267253549185841dcae2ba1851f97455c5df1ddb13e4f04156a559c75"}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.298354 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5nps" event={"ID":"b93bf160-39a1-43b0-a409-59b814a14258","Type":"ContainerStarted","Data":"87482be30e8eca8ef0e6ac00349cdabdd77c4a78054ae745e775b8b31fbee8a4"}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.307783 4959 patch_prober.go:28] interesting pod/downloads-7954f5f757-7pb5k container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.307822 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7pb5k" podUID="a8697566-9d27-4d19-be54-2c5307ab5962" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.308170 4959 patch_prober.go:28] interesting pod/downloads-7954f5f757-7pb5k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.308187 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7pb5k" podUID="a8697566-9d27-4d19-be54-2c5307ab5962" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.336791 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh2bh\" (UniqueName: \"kubernetes.io/projected/536f9813-b3c2-4be9-8e98-dcc68f2498a3-kube-api-access-qh2bh\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.336857 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-catalog-content\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.336949 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.337034 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-utilities\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.338632 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-utilities\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.340294 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-catalog-content\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: E0121 13:11:22.340670 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.840653907 +0000 UTC m=+143.803684650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.365610 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-j4l2m" podStartSLOduration=12.365589984 podStartE2EDuration="12.365589984s" podCreationTimestamp="2026-01-21 13:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:22.362616003 +0000 UTC m=+143.325646546" watchObservedRunningTime="2026-01-21 13:11:22.365589984 +0000 UTC m=+143.328620527"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.388532 4959 generic.go:334] "Generic (PLEG): container finished" podID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerID="813a79f95bc058c3fd2ea5ebd42119289c20fe5951a07031364df7c62a81f049" exitCode=0
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.388632 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf8h2" event={"ID":"27a81382-d955-4658-8b95-0bbbaf788ecf","Type":"ContainerDied","Data":"813a79f95bc058c3fd2ea5ebd42119289c20fe5951a07031364df7c62a81f049"}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.390730 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh2bh\" (UniqueName: \"kubernetes.io/projected/536f9813-b3c2-4be9-8e98-dcc68f2498a3-kube-api-access-qh2bh\") pod \"redhat-marketplace-svjvm\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.397373 4959 generic.go:334] "Generic (PLEG): container finished" podID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerID="1e993d16b50c09db983ed6ee16ab6ec07ec7dc1da90c53f2dfba997f220fdb87" exitCode=0
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.397437 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r59pr" event={"ID":"aea8e71a-36ca-4b96-8599-18a0b725e373","Type":"ContainerDied","Data":"1e993d16b50c09db983ed6ee16ab6ec07ec7dc1da90c53f2dfba997f220fdb87"}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.433179 4959 generic.go:334] "Generic (PLEG): container finished" podID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerID="e769f1e9d6b48aebc387d23028d586e79c7577fcbc4b2cd058fccd7f5ff93951" exitCode=0
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.433265 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x66q8" event={"ID":"c01bf13b-8ada-46de-a969-cb5691c8d1c0","Type":"ContainerDied","Data":"e769f1e9d6b48aebc387d23028d586e79c7577fcbc4b2cd058fccd7f5ff93951"}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.439611 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:22 crc kubenswrapper[4959]: E0121 13:11:22.441191 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:11:22.941175847 +0000 UTC m=+143.904206390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.471169 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" event={"ID":"e6a4be35-59c0-429a-a499-66cb3dc85aa5","Type":"ContainerStarted","Data":"498ce46c88508726d832fdb6f0f7d4f584b8f31267cdfa829bfbe892d4722f49"}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.476788 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svjvm"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.502293 4959 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T13:11:21.928772129Z","Handler":null,"Name":""}
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.519566 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-b7ws8" podStartSLOduration=124.519529236 podStartE2EDuration="2m4.519529236s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:22.518438286 +0000 UTC m=+143.481468829" watchObservedRunningTime="2026-01-21 13:11:22.519529236 +0000 UTC m=+143.482559779"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.541355 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:22 crc kubenswrapper[4959]: E0121 13:11:22.542770 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:11:23.04252032 +0000 UTC m=+144.005550893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xlvm4" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.545372 4959 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.545404 4959 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
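
[Annotation] The sequence that unblocks those retries is visible above: plugin_watcher.go picks up the registration socket at 13:11:21.928750, OperationExecutor.RegisterPlugin starts at 13:11:22.502293, and csi_plugin.go validates and registers the driver at 13:11:22.545372/.545404. In this handshake the kubelet dials the *-reg.sock socket, asks the plugin for its info, and reports the outcome back. A minimal sketch of the registrar side, assuming the upstream pluginregistration v1 gRPC API; the driver name, endpoint, version, and socket path are taken from the log, while the code itself is illustrative:

    // registrar: answers the kubelet's plugin-registration probe for a
    // CSI driver. The kubelet calls GetInfo on the -reg.sock socket and
    // then NotifyRegistrationStatus with the result.
    package main

    import (
        "context"
        "net"

        "google.golang.org/grpc"
        registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
    )

    type registrar struct{}

    func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
        return &registerapi.PluginInfo{
            Type:              registerapi.CSIPlugin,
            Name:              "kubevirt.io.hostpath-provisioner",
            Endpoint:          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
            SupportedVersions: []string{"1.0.0"}, // matches "versions: 1.0.0" above
        }, nil
    }

    func (registrar) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
        // The kubelet reports success or failure here; a real registrar
        // would log status.Error when PluginRegistered is false.
        return &registerapi.RegistrationStatusResponse{}, nil
    }

    func main() {
        l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock")
        if err != nil {
            panic(err)
        }
        srv := grpc.NewServer()
        registerapi.RegisterRegistrationServer(srv, registrar{})
        _ = srv.Serve(l)
    }

Once GetInfo succeeds, the kubelet adds the driver to its registered list, and the next MountDevice attempt below goes through.
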
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.583671 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.584730 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.587667 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.589513 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.600711 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.642749 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.642988 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.643035 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.655634 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.697567 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhqcb"]
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.744397 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.744455 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.744539 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.744607 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.753833 4959 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.753932 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
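
[Annotation] Two details in the successful attach above: the kubelet logs "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...", and MountDevice then succeeds by merely recording the globalmount path. Because the driver never advertised the STAGE_UNSTAGE_VOLUME node capability, the kubelet skips NodeStageVolume and defers the real work to MountVolume.SetUp (NodePublishVolume), which succeeds a few entries below. For contrast, a driver that wants a staging pass advertises the capability in its NodeGetCapabilities response; an illustrative sketch using the upstream CSI Go bindings (an assumed dependency, not the hostpath provisioner's actual code):

    // Prints the NodeGetCapabilities response a driver would return to
    // make the kubelet call NodeStageVolume before NodePublishVolume.
    // The hostpath provisioner in this log evidently returns an empty
    // capability list, hence the "Skipping MountDevice" entry.
    package main

    import (
        "fmt"

        "github.com/container-storage-interface/spec/lib/go/csi"
    )

    func main() {
        resp := &csi.NodeGetCapabilitiesResponse{
            Capabilities: []*csi.NodeServiceCapability{{
                Type: &csi.NodeServiceCapability_Rpc{
                    Rpc: &csi.NodeServiceCapability_RPC{
                        Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
                    },
                },
            }},
        }
        fmt.Println(resp)
    }
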
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.768149 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 13:11:22 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld
Jan 21 13:11:22 crc kubenswrapper[4959]: [+]process-running ok
Jan 21 13:11:22 crc kubenswrapper[4959]: healthz check failed
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.768257 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.780530 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.822306 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svjvm"]
Jan 21 13:11:22 crc kubenswrapper[4959]: W0121 13:11:22.822954 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod536f9813_b3c2_4be9_8e98_dcc68f2498a3.slice/crio-553bc947f8c28e9501e3177eaa31f23ad6a0f3224a36effa5162689b32bc3275 WatchSource:0}: Error finding container 553bc947f8c28e9501e3177eaa31f23ad6a0f3224a36effa5162689b32bc3275: Status 404 returned error can't find the container with id 553bc947f8c28e9501e3177eaa31f23ad6a0f3224a36effa5162689b32bc3275
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.835118 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xlvm4\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.868298 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.909181 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-72ps5"]
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.911802 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.916240 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.917288 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72ps5"]
Jan 21 13:11:22 crc kubenswrapper[4959]: I0121 13:11:22.940550 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.053849 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzf5\" (UniqueName: \"kubernetes.io/projected/8899f354-3f43-4222-88a9-221ca1a6dc6e-kube-api-access-tgzf5\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.054193 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-utilities\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.054351 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-catalog-content\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.158755 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-utilities\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.159304 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-catalog-content\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.159353 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzf5\" (UniqueName: \"kubernetes.io/projected/8899f354-3f43-4222-88a9-221ca1a6dc6e-kube-api-access-tgzf5\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.160637 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-catalog-content\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.160878 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-utilities\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.204177 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzf5\" (UniqueName: \"kubernetes.io/projected/8899f354-3f43-4222-88a9-221ca1a6dc6e-kube-api-access-tgzf5\") pod \"redhat-operators-72ps5\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.232679 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-72ps5"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.248051 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.427774 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.429166 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xlvm4"]
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.429201 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n7xxn"]
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.430520 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7xxn"]
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.430641 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7xxn"
Jan 21 13:11:23 crc kubenswrapper[4959]: W0121 13:11:23.436583 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod357ddb99_f95c_40fa_9b74_1e0fa56f10b6.slice/crio-50b64db3dc11bc538a73d3bb5aa7068f5ee91c39368761702825d4d0c1947d2a WatchSource:0}: Error finding container 50b64db3dc11bc538a73d3bb5aa7068f5ee91c39368761702825d4d0c1947d2a: Status 404 returned error can't find the container with id 50b64db3dc11bc538a73d3bb5aa7068f5ee91c39368761702825d4d0c1947d2a
Jan 21 13:11:23 crc kubenswrapper[4959]: W0121 13:11:23.438319 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d74ebb8_a165_44d5_a5cf_17217e03be90.slice/crio-0623eea71e7e19b6ecacad0541366b68c4cd41f180244e47b45fd84625d48515 WatchSource:0}: Error finding container 0623eea71e7e19b6ecacad0541366b68c4cd41f180244e47b45fd84625d48515: Status 404 returned error can't find the container with id 0623eea71e7e19b6ecacad0541366b68c4cd41f180244e47b45fd84625d48515
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.476592 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhqcb" event={"ID":"0220f7cc-761e-4995-aa56-6c543cd5a294","Type":"ContainerStarted","Data":"d77f219b521d3dbcf56882168c812dd6b036a83ea0913c3a04bde7f504fb21c0"}
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.477678 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"357ddb99-f95c-40fa-9b74-1e0fa56f10b6","Type":"ContainerStarted","Data":"50b64db3dc11bc538a73d3bb5aa7068f5ee91c39368761702825d4d0c1947d2a"}
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.478732 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" event={"ID":"4d74ebb8-a165-44d5-a5cf-17217e03be90","Type":"ContainerStarted","Data":"0623eea71e7e19b6ecacad0541366b68c4cd41f180244e47b45fd84625d48515"}
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.482252 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5nps" event={"ID":"b93bf160-39a1-43b0-a409-59b814a14258","Type":"ContainerStarted","Data":"af9b32df195de5e3cefbd25681b48c97929e20450f389ea84679f9e8927621b8"}
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.484785 4959 generic.go:334] "Generic (PLEG): container finished" podID="ab04d280-8b58-44e7-a789-f706b8c5f807" containerID="021f2ec969185f1f287f423330c4925953e7ff57f06fe920de426b108ad01138" exitCode=0
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.484862 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" event={"ID":"ab04d280-8b58-44e7-a789-f706b8c5f807","Type":"ContainerDied","Data":"021f2ec969185f1f287f423330c4925953e7ff57f06fe920de426b108ad01138"}
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.486692 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svjvm" event={"ID":"536f9813-b3c2-4be9-8e98-dcc68f2498a3","Type":"ContainerStarted","Data":"553bc947f8c28e9501e3177eaa31f23ad6a0f3224a36effa5162689b32bc3275"}
Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.492110 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-q5vmq" Jan 21
13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.565253 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-utilities\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.565689 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-catalog-content\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.565829 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkwkj\" (UniqueName: \"kubernetes.io/projected/fd12d219-aabd-430a-8567-e21c1674bbbf-kube-api-access-fkwkj\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.666693 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwkj\" (UniqueName: \"kubernetes.io/projected/fd12d219-aabd-430a-8567-e21c1674bbbf-kube-api-access-fkwkj\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.666801 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-utilities\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.666861 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-catalog-content\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.667939 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-catalog-content\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.668001 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-utilities\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.672802 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-72ps5"] Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.683418 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:23 crc 
kubenswrapper[4959]: I0121 13:11:23.683459 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.690758 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.697598 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwkj\" (UniqueName: \"kubernetes.io/projected/fd12d219-aabd-430a-8567-e21c1674bbbf-kube-api-access-fkwkj\") pod \"redhat-operators-n7xxn\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.745983 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h5nps" podStartSLOduration=13.745960939 podStartE2EDuration="13.745960939s" podCreationTimestamp="2026-01-21 13:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:23.727536089 +0000 UTC m=+144.690566652" watchObservedRunningTime="2026-01-21 13:11:23.745960939 +0000 UTC m=+144.708991482" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.760629 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.767377 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:23 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:23 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:23 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.767445 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.835019 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:11:23 crc kubenswrapper[4959]: I0121 13:11:23.858534 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.238133 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7xxn"] Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.497957 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"357ddb99-f95c-40fa-9b74-1e0fa56f10b6","Type":"ContainerStarted","Data":"071518cbd278725512f1bd1aa703c8a56f7902ed34fa96f7c7deabaa0c874602"} Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.501831 4959 generic.go:334] "Generic (PLEG): container finished" podID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerID="8b00b45fe596cb20e8df32d2bf64e2c3d6780af24ee41e9cde638896d4713a19" exitCode=0 Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.501864 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72ps5" event={"ID":"8899f354-3f43-4222-88a9-221ca1a6dc6e","Type":"ContainerDied","Data":"8b00b45fe596cb20e8df32d2bf64e2c3d6780af24ee41e9cde638896d4713a19"} Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.501928 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72ps5" event={"ID":"8899f354-3f43-4222-88a9-221ca1a6dc6e","Type":"ContainerStarted","Data":"03c305726eaa9dc01299479238e6f863d82a0905e047d359300d9cbed67a0078"} Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.503858 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" event={"ID":"4d74ebb8-a165-44d5-a5cf-17217e03be90","Type":"ContainerStarted","Data":"9f0ffc6e8564a2c38afb4b5ca60ba7055ad367337a5b758b80c48a5f4dfebd74"} Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.503995 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.505015 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7xxn" event={"ID":"fd12d219-aabd-430a-8567-e21c1674bbbf","Type":"ContainerStarted","Data":"f95bdfefb7b26c80ec5aba3b28e21796b652710ff882721c63720450d7ed8389"} Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.506563 4959 generic.go:334] "Generic (PLEG): container finished" podID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerID="1543bfb782100cbdb2d12508ceecf821e52d529e8197c21b6fb735a315881d89" exitCode=0 Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.506676 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svjvm" event={"ID":"536f9813-b3c2-4be9-8e98-dcc68f2498a3","Type":"ContainerDied","Data":"1543bfb782100cbdb2d12508ceecf821e52d529e8197c21b6fb735a315881d89"} Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.514031 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.514015802 podStartE2EDuration="2.514015802s" podCreationTimestamp="2026-01-21 13:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:24.511775171 +0000 UTC m=+145.474805724" watchObservedRunningTime="2026-01-21 13:11:24.514015802 +0000 UTC m=+145.477046345" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.514198 4959 
generic.go:334] "Generic (PLEG): container finished" podID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerID="b389f490a63a4632d398622a5b833fd036b8f1775631635c4d51a624ccb8f7d5" exitCode=0 Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.514245 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhqcb" event={"ID":"0220f7cc-761e-4995-aa56-6c543cd5a294","Type":"ContainerDied","Data":"b389f490a63a4632d398622a5b833fd036b8f1775631635c4d51a624ccb8f7d5"} Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.524219 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9bt5m" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.559197 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" podStartSLOduration=126.559160088 podStartE2EDuration="2m6.559160088s" podCreationTimestamp="2026-01-21 13:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:24.557928455 +0000 UTC m=+145.520958998" watchObservedRunningTime="2026-01-21 13:11:24.559160088 +0000 UTC m=+145.522190631" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.772389 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:24 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:24 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:24 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.772734 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.781868 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.886739 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w79n\" (UniqueName: \"kubernetes.io/projected/ab04d280-8b58-44e7-a789-f706b8c5f807-kube-api-access-5w79n\") pod \"ab04d280-8b58-44e7-a789-f706b8c5f807\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.886878 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab04d280-8b58-44e7-a789-f706b8c5f807-secret-volume\") pod \"ab04d280-8b58-44e7-a789-f706b8c5f807\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.886927 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab04d280-8b58-44e7-a789-f706b8c5f807-config-volume\") pod \"ab04d280-8b58-44e7-a789-f706b8c5f807\" (UID: \"ab04d280-8b58-44e7-a789-f706b8c5f807\") " Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.887942 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab04d280-8b58-44e7-a789-f706b8c5f807-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab04d280-8b58-44e7-a789-f706b8c5f807" (UID: "ab04d280-8b58-44e7-a789-f706b8c5f807"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.902477 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab04d280-8b58-44e7-a789-f706b8c5f807-kube-api-access-5w79n" (OuterVolumeSpecName: "kube-api-access-5w79n") pod "ab04d280-8b58-44e7-a789-f706b8c5f807" (UID: "ab04d280-8b58-44e7-a789-f706b8c5f807"). InnerVolumeSpecName "kube-api-access-5w79n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.904737 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab04d280-8b58-44e7-a789-f706b8c5f807-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab04d280-8b58-44e7-a789-f706b8c5f807" (UID: "ab04d280-8b58-44e7-a789-f706b8c5f807"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.988362 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w79n\" (UniqueName: \"kubernetes.io/projected/ab04d280-8b58-44e7-a789-f706b8c5f807-kube-api-access-5w79n\") on node \"crc\" DevicePath \"\"" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.988407 4959 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab04d280-8b58-44e7-a789-f706b8c5f807-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:11:24 crc kubenswrapper[4959]: I0121 13:11:24.988420 4959 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab04d280-8b58-44e7-a789-f706b8c5f807-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.275473 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.292871 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.293841 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.397579 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.398194 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.398227 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.413883 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.418546 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.433352 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.438357 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.521074 4959 generic.go:334] "Generic (PLEG): container finished" podID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerID="600d243c822fef9a85967245f15ffcb801cf1557246d65197ab895d97be2f732" exitCode=0 Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.521172 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7xxn" event={"ID":"fd12d219-aabd-430a-8567-e21c1674bbbf","Type":"ContainerDied","Data":"600d243c822fef9a85967245f15ffcb801cf1557246d65197ab895d97be2f732"} Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.533070 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" event={"ID":"ab04d280-8b58-44e7-a789-f706b8c5f807","Type":"ContainerDied","Data":"df3af7a47a4834cce98f7e316d2c29477f3aba74507040c7f7d081efcdd8d572"} Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.533129 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df3af7a47a4834cce98f7e316d2c29477f3aba74507040c7f7d081efcdd8d572" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.533219 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.550911 4959 generic.go:334] "Generic (PLEG): container finished" podID="357ddb99-f95c-40fa-9b74-1e0fa56f10b6" containerID="071518cbd278725512f1bd1aa703c8a56f7902ed34fa96f7c7deabaa0c874602" exitCode=0 Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.551537 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"357ddb99-f95c-40fa-9b74-1e0fa56f10b6","Type":"ContainerDied","Data":"071518cbd278725512f1bd1aa703c8a56f7902ed34fa96f7c7deabaa0c874602"} Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.611195 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.711568 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.771023 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:25 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:25 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:25 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:25 crc kubenswrapper[4959]: I0121 13:11:25.771107 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:26 crc kubenswrapper[4959]: W0121 13:11:26.169250 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-7dc55b04b7771a4de4d1e65b17d6d3127e4ff50f00f3f64c1986c4759d070f12 WatchSource:0}: Error finding container 7dc55b04b7771a4de4d1e65b17d6d3127e4ff50f00f3f64c1986c4759d070f12: Status 404 returned error can't find the container with id 7dc55b04b7771a4de4d1e65b17d6d3127e4ff50f00f3f64c1986c4759d070f12 Jan 21 13:11:26 crc kubenswrapper[4959]: W0121 13:11:26.287649 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-05b1eec9b2202aedff5eefa4a0f1c5bf35127cd559eb81ca3ad485bfc70bdbc1 WatchSource:0}: Error finding container 05b1eec9b2202aedff5eefa4a0f1c5bf35127cd559eb81ca3ad485bfc70bdbc1: Status 404 returned error can't find the container with id 05b1eec9b2202aedff5eefa4a0f1c5bf35127cd559eb81ca3ad485bfc70bdbc1 Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.599464 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7dc55b04b7771a4de4d1e65b17d6d3127e4ff50f00f3f64c1986c4759d070f12"} Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.600876 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"05b1eec9b2202aedff5eefa4a0f1c5bf35127cd559eb81ca3ad485bfc70bdbc1"} Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.615371 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b1677d2e6e6649d23d2332df5cf0ce144ea89571fda866cf2133d1e6dad77c67"} Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.615429 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"696d4b896fbad55cc9521cab5c25cc2d87058160fd502a907c542b1b4936b42c"} Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.615950 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.766256 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:26 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:26 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:26 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.766681 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.837211 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 13:11:26 crc kubenswrapper[4959]: E0121 13:11:26.837575 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab04d280-8b58-44e7-a789-f706b8c5f807" containerName="collect-profiles" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.837590 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab04d280-8b58-44e7-a789-f706b8c5f807" containerName="collect-profiles" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.837744 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab04d280-8b58-44e7-a789-f706b8c5f807" containerName="collect-profiles" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.838286 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.840773 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.840777 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.841041 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.949352 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69be8fe2-2eeb-4308-af58-570d6afdf17b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69be8fe2-2eeb-4308-af58-570d6afdf17b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:26 crc kubenswrapper[4959]: I0121 13:11:26.949609 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69be8fe2-2eeb-4308-af58-570d6afdf17b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69be8fe2-2eeb-4308-af58-570d6afdf17b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.002911 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.050298 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kubelet-dir\") pod \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\" (UID: \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\") " Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.050390 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kube-api-access\") pod \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\" (UID: \"357ddb99-f95c-40fa-9b74-1e0fa56f10b6\") " Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.050558 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69be8fe2-2eeb-4308-af58-570d6afdf17b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69be8fe2-2eeb-4308-af58-570d6afdf17b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.050647 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69be8fe2-2eeb-4308-af58-570d6afdf17b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69be8fe2-2eeb-4308-af58-570d6afdf17b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.050828 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "357ddb99-f95c-40fa-9b74-1e0fa56f10b6" (UID: "357ddb99-f95c-40fa-9b74-1e0fa56f10b6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.050889 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69be8fe2-2eeb-4308-af58-570d6afdf17b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69be8fe2-2eeb-4308-af58-570d6afdf17b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.056801 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "357ddb99-f95c-40fa-9b74-1e0fa56f10b6" (UID: "357ddb99-f95c-40fa-9b74-1e0fa56f10b6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.069943 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69be8fe2-2eeb-4308-af58-570d6afdf17b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69be8fe2-2eeb-4308-af58-570d6afdf17b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.152986 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.153032 4959 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/357ddb99-f95c-40fa-9b74-1e0fa56f10b6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.207842 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.578073 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 13:11:27 crc kubenswrapper[4959]: W0121 13:11:27.617753 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod69be8fe2_2eeb_4308_af58_570d6afdf17b.slice/crio-524b12b9f317cb380adabe9b5f9159ffc5740628070725aa24edb1dffe041140 WatchSource:0}: Error finding container 524b12b9f317cb380adabe9b5f9159ffc5740628070725aa24edb1dffe041140: Status 404 returned error can't find the container with id 524b12b9f317cb380adabe9b5f9159ffc5740628070725aa24edb1dffe041140 Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.635080 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d1d2c3eda7e92ec3e268044e17f6442e75cbdbe3df6b9f453bab807ca55b54db"} Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.639892 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"357ddb99-f95c-40fa-9b74-1e0fa56f10b6","Type":"ContainerDied","Data":"50b64db3dc11bc538a73d3bb5aa7068f5ee91c39368761702825d4d0c1947d2a"} Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.639929 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b64db3dc11bc538a73d3bb5aa7068f5ee91c39368761702825d4d0c1947d2a" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.640015 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.643121 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1ad596a8bdc0839c57bdee3269c94312a98d28f25eb57dd62325447ed58a21a1"} Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.765595 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:27 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:27 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:27 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:27 crc kubenswrapper[4959]: I0121 13:11:27.765672 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:28 crc kubenswrapper[4959]: I0121 13:11:28.662895 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69be8fe2-2eeb-4308-af58-570d6afdf17b","Type":"ContainerStarted","Data":"9b9cc2a2755efb2499e969245a7032d3ec4d4c98d8c42fa123f12b2f85dd731d"} Jan 21 13:11:28 crc kubenswrapper[4959]: I0121 13:11:28.663436 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69be8fe2-2eeb-4308-af58-570d6afdf17b","Type":"ContainerStarted","Data":"524b12b9f317cb380adabe9b5f9159ffc5740628070725aa24edb1dffe041140"} Jan 21 13:11:28 crc kubenswrapper[4959]: I0121 13:11:28.682626 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.682609132 podStartE2EDuration="2.682609132s" podCreationTimestamp="2026-01-21 13:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:11:28.682061567 +0000 UTC m=+149.645092110" watchObservedRunningTime="2026-01-21 13:11:28.682609132 +0000 UTC m=+149.645639675" Jan 21 13:11:28 crc kubenswrapper[4959]: I0121 13:11:28.764393 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:28 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:28 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:28 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:28 crc kubenswrapper[4959]: I0121 13:11:28.764666 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:29 crc kubenswrapper[4959]: I0121 13:11:29.131575 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-j4l2m" Jan 21 13:11:29 crc kubenswrapper[4959]: I0121 13:11:29.696889 
4959 generic.go:334] "Generic (PLEG): container finished" podID="69be8fe2-2eeb-4308-af58-570d6afdf17b" containerID="9b9cc2a2755efb2499e969245a7032d3ec4d4c98d8c42fa123f12b2f85dd731d" exitCode=0 Jan 21 13:11:29 crc kubenswrapper[4959]: I0121 13:11:29.696958 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69be8fe2-2eeb-4308-af58-570d6afdf17b","Type":"ContainerDied","Data":"9b9cc2a2755efb2499e969245a7032d3ec4d4c98d8c42fa123f12b2f85dd731d"} Jan 21 13:11:29 crc kubenswrapper[4959]: I0121 13:11:29.764219 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:29 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:29 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:29 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:29 crc kubenswrapper[4959]: I0121 13:11:29.764279 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:30 crc kubenswrapper[4959]: I0121 13:11:30.764366 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:30 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:30 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:30 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:30 crc kubenswrapper[4959]: I0121 13:11:30.765625 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.027343 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.123671 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69be8fe2-2eeb-4308-af58-570d6afdf17b-kubelet-dir\") pod \"69be8fe2-2eeb-4308-af58-570d6afdf17b\" (UID: \"69be8fe2-2eeb-4308-af58-570d6afdf17b\") " Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.123798 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69be8fe2-2eeb-4308-af58-570d6afdf17b-kube-api-access\") pod \"69be8fe2-2eeb-4308-af58-570d6afdf17b\" (UID: \"69be8fe2-2eeb-4308-af58-570d6afdf17b\") " Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.124162 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be8fe2-2eeb-4308-af58-570d6afdf17b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69be8fe2-2eeb-4308-af58-570d6afdf17b" (UID: "69be8fe2-2eeb-4308-af58-570d6afdf17b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.133211 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69be8fe2-2eeb-4308-af58-570d6afdf17b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69be8fe2-2eeb-4308-af58-570d6afdf17b" (UID: "69be8fe2-2eeb-4308-af58-570d6afdf17b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.226230 4959 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69be8fe2-2eeb-4308-af58-570d6afdf17b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.226270 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69be8fe2-2eeb-4308-af58-570d6afdf17b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.712215 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69be8fe2-2eeb-4308-af58-570d6afdf17b","Type":"ContainerDied","Data":"524b12b9f317cb380adabe9b5f9159ffc5740628070725aa24edb1dffe041140"} Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.712257 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="524b12b9f317cb380adabe9b5f9159ffc5740628070725aa24edb1dffe041140" Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.712279 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.763125 4959 patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:31 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:31 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:31 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:31 crc kubenswrapper[4959]: I0121 13:11:31.763200 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:32 crc kubenswrapper[4959]: I0121 13:11:32.245278 4959 patch_prober.go:28] interesting pod/console-f9d7485db-5qxvm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 21 13:11:32 crc kubenswrapper[4959]: I0121 13:11:32.245594 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5qxvm" podUID="277cb73f-7c9e-46e0-bb04-4baea31ec998" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 21 13:11:32 crc kubenswrapper[4959]: I0121 13:11:32.317700 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7pb5k" Jan 21 13:11:32 crc kubenswrapper[4959]: I0121 13:11:32.763286 4959 
patch_prober.go:28] interesting pod/router-default-5444994796-hn2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:11:32 crc kubenswrapper[4959]: [-]has-synced failed: reason withheld Jan 21 13:11:32 crc kubenswrapper[4959]: [+]process-running ok Jan 21 13:11:32 crc kubenswrapper[4959]: healthz check failed Jan 21 13:11:32 crc kubenswrapper[4959]: I0121 13:11:32.763352 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hn2rm" podUID="f817e91b-c6c9-4fa8-b73f-743cf9ed97b3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:11:33 crc kubenswrapper[4959]: I0121 13:11:33.764193 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:33 crc kubenswrapper[4959]: I0121 13:11:33.767773 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hn2rm" Jan 21 13:11:34 crc kubenswrapper[4959]: I0121 13:11:34.717592 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d8t58"] Jan 21 13:11:34 crc kubenswrapper[4959]: I0121 13:11:34.718378 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerName="controller-manager" containerID="cri-o://181ad4258fb81e2c46b75a5448a9838f792b353d4faa15e6beb70bc23c0427fe" gracePeriod=30 Jan 21 13:11:34 crc kubenswrapper[4959]: I0121 13:11:34.724000 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"] Jan 21 13:11:34 crc kubenswrapper[4959]: I0121 13:11:34.724282 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" podUID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" containerName="route-controller-manager" containerID="cri-o://91790559c510c741338728f5c91334b227f5b41164a54649e5fd252316daafae" gracePeriod=30 Jan 21 13:11:35 crc kubenswrapper[4959]: I0121 13:11:35.738501 4959 generic.go:334] "Generic (PLEG): container finished" podID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" containerID="91790559c510c741338728f5c91334b227f5b41164a54649e5fd252316daafae" exitCode=0 Jan 21 13:11:35 crc kubenswrapper[4959]: I0121 13:11:35.738566 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" event={"ID":"fc248d84-5152-4675-9b2b-596ba0b2dc7c","Type":"ContainerDied","Data":"91790559c510c741338728f5c91334b227f5b41164a54649e5fd252316daafae"} Jan 21 13:11:35 crc kubenswrapper[4959]: I0121 13:11:35.742673 4959 generic.go:334] "Generic (PLEG): container finished" podID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerID="181ad4258fb81e2c46b75a5448a9838f792b353d4faa15e6beb70bc23c0427fe" exitCode=0 Jan 21 13:11:35 crc kubenswrapper[4959]: I0121 13:11:35.742702 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" event={"ID":"a3221620-4989-4bff-8cfc-19da6a21a2da","Type":"ContainerDied","Data":"181ad4258fb81e2c46b75a5448a9838f792b353d4faa15e6beb70bc23c0427fe"} Jan 21 13:11:40 
crc kubenswrapper[4959]: I0121 13:11:40.059637 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:11:40 crc kubenswrapper[4959]: I0121 13:11:40.076560 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585-metrics-certs\") pod \"network-metrics-daemon-6mzgn\" (UID: \"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585\") " pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:11:40 crc kubenswrapper[4959]: I0121 13:11:40.102416 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6mzgn" Jan 21 13:11:42 crc kubenswrapper[4959]: I0121 13:11:42.253772 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:42 crc kubenswrapper[4959]: I0121 13:11:42.259529 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:11:42 crc kubenswrapper[4959]: I0121 13:11:42.530165 4959 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hl7gd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 21 13:11:42 crc kubenswrapper[4959]: I0121 13:11:42.530224 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" podUID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 21 13:11:42 crc kubenswrapper[4959]: I0121 13:11:42.874325 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:11:43 crc kubenswrapper[4959]: I0121 13:11:43.375907 4959 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-d8t58 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 21 13:11:43 crc kubenswrapper[4959]: I0121 13:11:43.377063 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 21 13:11:51 crc kubenswrapper[4959]: I0121 13:11:51.379446 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:11:51 crc kubenswrapper[4959]: I0121 13:11:51.379866 4959 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:11:52 crc kubenswrapper[4959]: I0121 13:11:52.535278 4959 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hl7gd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 21 13:11:52 crc kubenswrapper[4959]: I0121 13:11:52.535332 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" podUID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 21 13:11:53 crc kubenswrapper[4959]: I0121 13:11:53.376118 4959 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-d8t58 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 21 13:11:53 crc kubenswrapper[4959]: I0121 13:11:53.376187 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 21 13:11:54 crc kubenswrapper[4959]: I0121 13:11:54.063776 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dghzw" Jan 21 13:11:58 crc kubenswrapper[4959]: E0121 13:11:58.315013 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 13:11:58 crc kubenswrapper[4959]: E0121 13:11:58.315576 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2bhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vf8h2_openshift-marketplace(27a81382-d955-4658-8b95-0bbbaf788ecf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:11:58 crc kubenswrapper[4959]: E0121 13:11:58.316791 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vf8h2" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.219112 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 13:12:02 crc kubenswrapper[4959]: E0121 13:12:02.220798 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69be8fe2-2eeb-4308-af58-570d6afdf17b" containerName="pruner" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.220836 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="69be8fe2-2eeb-4308-af58-570d6afdf17b" containerName="pruner" Jan 21 13:12:02 crc kubenswrapper[4959]: E0121 13:12:02.220855 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357ddb99-f95c-40fa-9b74-1e0fa56f10b6" containerName="pruner" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.220861 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="357ddb99-f95c-40fa-9b74-1e0fa56f10b6" containerName="pruner" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.220951 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="69be8fe2-2eeb-4308-af58-570d6afdf17b" containerName="pruner" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.220963 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="357ddb99-f95c-40fa-9b74-1e0fa56f10b6" containerName="pruner" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.222867 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.228146 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.228274 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.231756 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.380187 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.380274 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.482120 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.482183 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.482343 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.512382 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:02 crc kubenswrapper[4959]: I0121 13:12:02.545505 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:02 crc kubenswrapper[4959]: E0121 13:12:02.601707 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vf8h2" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" Jan 21 13:12:02 crc kubenswrapper[4959]: E0121 13:12:02.674810 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 13:12:02 crc kubenswrapper[4959]: E0121 13:12:02.675004 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkwkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-n7xxn_openshift-marketplace(fd12d219-aabd-430a-8567-e21c1674bbbf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:12:02 crc kubenswrapper[4959]: E0121 13:12:02.677223 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-n7xxn" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" Jan 21 13:12:03 crc kubenswrapper[4959]: I0121 13:12:03.530757 4959 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hl7gd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 13:12:03 crc kubenswrapper[4959]: 
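
These marketplace catalog pods cycle between two distinct states: ErrImagePull, where a pull attempt itself failed (here "context canceled" while copying from the manifest list), and ImagePullBackOff, where the kubelet declines to retry immediately. A sketch of the delay progression, assuming the commonly cited kubelet defaults of a 10s initial back-off doubling to a 5m cap; the real kubelet tracks this per image reference and resets it after a successful pull:

```go
// Back-off progression behind "Back-off pulling image ..." — a sketch
// under assumed defaults (10s initial, doubling, capped at 5m).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```
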
I0121 13:12:03.530830 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" podUID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 13:12:03 crc kubenswrapper[4959]: E0121 13:12:03.852076 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-n7xxn" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" Jan 21 13:12:03 crc kubenswrapper[4959]: E0121 13:12:03.928906 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 13:12:03 crc kubenswrapper[4959]: E0121 13:12:03.929072 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qh2bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-svjvm_openshift-marketplace(536f9813-b3c2-4be9-8e98-dcc68f2498a3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:12:03 crc kubenswrapper[4959]: E0121 13:12:03.931573 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-svjvm" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" Jan 21 13:12:03 crc kubenswrapper[4959]: E0121 13:12:03.949423 4959 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 13:12:03 crc kubenswrapper[4959]: E0121 13:12:03.949567 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrbkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x66q8_openshift-marketplace(c01bf13b-8ada-46de-a969-cb5691c8d1c0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:12:03 crc kubenswrapper[4959]: E0121 13:12:03.952209 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x66q8" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" Jan 21 13:12:04 crc kubenswrapper[4959]: I0121 13:12:04.376873 4959 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-d8t58 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 13:12:04 crc kubenswrapper[4959]: I0121 13:12:04.376935 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.229876 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x66q8" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.229911 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-svjvm" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.302126 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.311383 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.311554 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pl2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r59pr_openshift-marketplace(aea8e71a-36ca-4b96-8599-18a0b725e373): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.313486 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-r59pr" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.315630 4959 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.315780 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtncs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qhqcb_openshift-marketplace(0220f7cc-761e-4995-aa56-6c543cd5a294): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.316971 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qhqcb" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.334976 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.344221 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt"] Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.344504 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerName="controller-manager" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.344520 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerName="controller-manager" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.344530 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" containerName="route-controller-manager" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.344540 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" containerName="route-controller-manager" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.344664 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" containerName="route-controller-manager" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.344683 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" containerName="controller-manager" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.345130 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.355932 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt"] Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.370060 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.370244 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgzf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-72ps5_openshift-marketplace(8899f354-3f43-4222-88a9-221ca1a6dc6e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.372316 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-72ps5" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.390731 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.390941 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vpg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tv8w6_openshift-marketplace(ea3b719b-be6f-4a11-a13c-ba1bfca953a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.396214 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tv8w6" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.424704 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428335 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc248d84-5152-4675-9b2b-596ba0b2dc7c-serving-cert\") pod \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428417 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles\") pod \"a3221620-4989-4bff-8cfc-19da6a21a2da\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428451 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-client-ca\") pod \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428490 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzgsr\" (UniqueName: \"kubernetes.io/projected/a3221620-4989-4bff-8cfc-19da6a21a2da-kube-api-access-xzgsr\") pod \"a3221620-4989-4bff-8cfc-19da6a21a2da\" (UID: 
\"a3221620-4989-4bff-8cfc-19da6a21a2da\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428525 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-config\") pod \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428567 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca\") pod \"a3221620-4989-4bff-8cfc-19da6a21a2da\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428645 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tz97\" (UniqueName: \"kubernetes.io/projected/fc248d84-5152-4675-9b2b-596ba0b2dc7c-kube-api-access-5tz97\") pod \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\" (UID: \"fc248d84-5152-4675-9b2b-596ba0b2dc7c\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428662 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-config\") pod \"a3221620-4989-4bff-8cfc-19da6a21a2da\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428707 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert\") pod \"a3221620-4989-4bff-8cfc-19da6a21a2da\" (UID: \"a3221620-4989-4bff-8cfc-19da6a21a2da\") " Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.428973 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-config\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.429006 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a67b56-220f-4248-8b7a-bacf2caa0400-serving-cert\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.429059 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4ws\" (UniqueName: \"kubernetes.io/projected/c5a67b56-220f-4248-8b7a-bacf2caa0400-kube-api-access-9v4ws\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.429086 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-client-ca\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.430084 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-config" (OuterVolumeSpecName: "config") pod "fc248d84-5152-4675-9b2b-596ba0b2dc7c" (UID: "fc248d84-5152-4675-9b2b-596ba0b2dc7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.431129 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca" (OuterVolumeSpecName: "client-ca") pod "a3221620-4989-4bff-8cfc-19da6a21a2da" (UID: "a3221620-4989-4bff-8cfc-19da6a21a2da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.431152 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc248d84-5152-4675-9b2b-596ba0b2dc7c" (UID: "fc248d84-5152-4675-9b2b-596ba0b2dc7c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.431494 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-config" (OuterVolumeSpecName: "config") pod "a3221620-4989-4bff-8cfc-19da6a21a2da" (UID: "a3221620-4989-4bff-8cfc-19da6a21a2da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.431866 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a3221620-4989-4bff-8cfc-19da6a21a2da" (UID: "a3221620-4989-4bff-8cfc-19da6a21a2da"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.436329 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3221620-4989-4bff-8cfc-19da6a21a2da" (UID: "a3221620-4989-4bff-8cfc-19da6a21a2da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.437636 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc248d84-5152-4675-9b2b-596ba0b2dc7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc248d84-5152-4675-9b2b-596ba0b2dc7c" (UID: "fc248d84-5152-4675-9b2b-596ba0b2dc7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.452453 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc248d84-5152-4675-9b2b-596ba0b2dc7c-kube-api-access-5tz97" (OuterVolumeSpecName: "kube-api-access-5tz97") pod "fc248d84-5152-4675-9b2b-596ba0b2dc7c" (UID: "fc248d84-5152-4675-9b2b-596ba0b2dc7c"). InnerVolumeSpecName "kube-api-access-5tz97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.455773 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3221620-4989-4bff-8cfc-19da6a21a2da-kube-api-access-xzgsr" (OuterVolumeSpecName: "kube-api-access-xzgsr") pod "a3221620-4989-4bff-8cfc-19da6a21a2da" (UID: "a3221620-4989-4bff-8cfc-19da6a21a2da"). InnerVolumeSpecName "kube-api-access-xzgsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.495937 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6mzgn"] Jan 21 13:12:05 crc kubenswrapper[4959]: W0121 13:12:05.516483 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2af1d4ef_d00b_4bf6_b2ff_77d30f5f5585.slice/crio-6ed6bc2c38e68de8df326dff1e5241a44ea0de70a00d2577dd2aa73e97482404 WatchSource:0}: Error finding container 6ed6bc2c38e68de8df326dff1e5241a44ea0de70a00d2577dd2aa73e97482404: Status 404 returned error can't find the container with id 6ed6bc2c38e68de8df326dff1e5241a44ea0de70a00d2577dd2aa73e97482404 Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531002 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-config\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531123 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a67b56-220f-4248-8b7a-bacf2caa0400-serving-cert\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531175 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4ws\" (UniqueName: \"kubernetes.io/projected/c5a67b56-220f-4248-8b7a-bacf2caa0400-kube-api-access-9v4ws\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531294 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-client-ca\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531355 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531393 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tz97\" (UniqueName: \"kubernetes.io/projected/fc248d84-5152-4675-9b2b-596ba0b2dc7c-kube-api-access-5tz97\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: 
I0121 13:12:05.531403 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531412 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3221620-4989-4bff-8cfc-19da6a21a2da-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531420 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc248d84-5152-4675-9b2b-596ba0b2dc7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531428 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3221620-4989-4bff-8cfc-19da6a21a2da-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531438 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531446 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzgsr\" (UniqueName: \"kubernetes.io/projected/a3221620-4989-4bff-8cfc-19da6a21a2da-kube-api-access-xzgsr\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.531455 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc248d84-5152-4675-9b2b-596ba0b2dc7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.532302 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-client-ca\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.533122 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.533953 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-config\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.536963 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a67b56-220f-4248-8b7a-bacf2caa0400-serving-cert\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.554752 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4ws\" (UniqueName: \"kubernetes.io/projected/c5a67b56-220f-4248-8b7a-bacf2caa0400-kube-api-access-9v4ws\") pod \"route-controller-manager-6cb9849c86-428lt\" (UID: 
\"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.689179 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.876561 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt"] Jan 21 13:12:05 crc kubenswrapper[4959]: W0121 13:12:05.884194 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a67b56_220f_4248_8b7a_bacf2caa0400.slice/crio-ffb78143041ef7bba3ed22ae7964ac0cd61573c7101a492f2056d72225536993 WatchSource:0}: Error finding container ffb78143041ef7bba3ed22ae7964ac0cd61573c7101a492f2056d72225536993: Status 404 returned error can't find the container with id ffb78143041ef7bba3ed22ae7964ac0cd61573c7101a492f2056d72225536993 Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.928216 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" event={"ID":"a3221620-4989-4bff-8cfc-19da6a21a2da","Type":"ContainerDied","Data":"7155ac0d101e875b14a32c870333b0369601e6735edd65a4ce993e5157acfdc2"} Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.928474 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d8t58" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.928616 4959 scope.go:117] "RemoveContainer" containerID="181ad4258fb81e2c46b75a5448a9838f792b353d4faa15e6beb70bc23c0427fe" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.931605 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78","Type":"ContainerStarted","Data":"6520f077278c8e673c4bdabb7374205484aaaf530ce6e7c3263e0380d7a8661d"} Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.931655 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78","Type":"ContainerStarted","Data":"ac12bbfc8938afc112f9fa1000d515063cf98b38bff870515e5a84c2d09cec9e"} Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.937855 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" event={"ID":"fc248d84-5152-4675-9b2b-596ba0b2dc7c","Type":"ContainerDied","Data":"c6bf26cf2005fdb939306a0e339d92ed68701bb1f5844c5a8a8c7163fd520771"} Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.937955 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.947667 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.9475831660000003 podStartE2EDuration="3.947583166s" podCreationTimestamp="2026-01-21 13:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:12:05.946499836 +0000 UTC m=+186.909530399" watchObservedRunningTime="2026-01-21 13:12:05.947583166 +0000 UTC m=+186.910613709" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.951606 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" event={"ID":"c5a67b56-220f-4248-8b7a-bacf2caa0400","Type":"ContainerStarted","Data":"ffb78143041ef7bba3ed22ae7964ac0cd61573c7101a492f2056d72225536993"} Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.957007 4959 scope.go:117] "RemoveContainer" containerID="91790559c510c741338728f5c91334b227f5b41164a54649e5fd252316daafae" Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.957501 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" event={"ID":"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585","Type":"ContainerStarted","Data":"606eb7d6827f324a5df79134009720602169a8b4def66039c95f6bf64bf895fc"} Jan 21 13:12:05 crc kubenswrapper[4959]: I0121 13:12:05.957541 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" event={"ID":"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585","Type":"ContainerStarted","Data":"6ed6bc2c38e68de8df326dff1e5241a44ea0de70a00d2577dd2aa73e97482404"} Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.959726 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-72ps5" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.959893 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tv8w6" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.961781 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qhqcb" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" Jan 21 13:12:05 crc kubenswrapper[4959]: E0121 13:12:05.961881 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r59pr" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.008547 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-d8t58"] Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.013377 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d8t58"] Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.072000 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"] Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.075734 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hl7gd"] Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.815654 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.816662 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.829684 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.949799 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-var-lock\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.949849 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3529cbe7-e333-4087-8d38-b1bea342f086-kube-api-access\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.949869 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.962582 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" event={"ID":"c5a67b56-220f-4248-8b7a-bacf2caa0400","Type":"ContainerStarted","Data":"3a48e5a9ce95353ebce7a98963389ec3fc8d29dd59f642707fa3b1d36ade0e3f"} Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.962896 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.964341 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6mzgn" event={"ID":"2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585","Type":"ContainerStarted","Data":"7d7fba73de8297916fc54a301dbe089d59abed6f65560ef0e39e7c345134a941"} Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.967480 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.967510 4959 generic.go:334] 
"Generic (PLEG): container finished" podID="3aae7c49-036c-4b20-8ed2-f5a0ffa30d78" containerID="6520f077278c8e673c4bdabb7374205484aaaf530ce6e7c3263e0380d7a8661d" exitCode=0 Jan 21 13:12:06 crc kubenswrapper[4959]: I0121 13:12:06.967525 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78","Type":"ContainerDied","Data":"6520f077278c8e673c4bdabb7374205484aaaf530ce6e7c3263e0380d7a8661d"} Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.001196 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" podStartSLOduration=13.001177254 podStartE2EDuration="13.001177254s" podCreationTimestamp="2026-01-21 13:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:12:06.980836922 +0000 UTC m=+187.943867475" watchObservedRunningTime="2026-01-21 13:12:07.001177254 +0000 UTC m=+187.964207807" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.010044 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6mzgn" podStartSLOduration=170.010024425 podStartE2EDuration="2m50.010024425s" podCreationTimestamp="2026-01-21 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:12:07.009542812 +0000 UTC m=+187.972573355" watchObservedRunningTime="2026-01-21 13:12:07.010024425 +0000 UTC m=+187.973054968" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.051739 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-var-lock\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.051785 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3529cbe7-e333-4087-8d38-b1bea342f086-kube-api-access\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.051803 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.053415 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-var-lock\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.053647 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:07 crc 
kubenswrapper[4959]: I0121 13:12:07.074254 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3529cbe7-e333-4087-8d38-b1bea342f086-kube-api-access\") pod \"installer-9-crc\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.142825 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.304486 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3221620-4989-4bff-8cfc-19da6a21a2da" path="/var/lib/kubelet/pods/a3221620-4989-4bff-8cfc-19da6a21a2da/volumes" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.305640 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc248d84-5152-4675-9b2b-596ba0b2dc7c" path="/var/lib/kubelet/pods/fc248d84-5152-4675-9b2b-596ba0b2dc7c/volumes" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.330405 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 13:12:07 crc kubenswrapper[4959]: W0121 13:12:07.346303 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3529cbe7_e333_4087_8d38_b1bea342f086.slice/crio-ca70974464cb92a6cecadfcc3568501fa397e92dc1fe16935b63204f5525377c WatchSource:0}: Error finding container ca70974464cb92a6cecadfcc3568501fa397e92dc1fe16935b63204f5525377c: Status 404 returned error can't find the container with id ca70974464cb92a6cecadfcc3568501fa397e92dc1fe16935b63204f5525377c Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.635991 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc"] Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.637672 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.644947 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.645083 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.645177 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.645251 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.645317 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.646155 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.649167 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc"] Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.654393 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.770737 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-proxy-ca-bundles\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.771117 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-config\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.771154 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hp2h\" (UniqueName: \"kubernetes.io/projected/13d43671-5ff7-4f29-850c-462b172c82a1-kube-api-access-4hp2h\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.771174 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-client-ca\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.771217 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/13d43671-5ff7-4f29-850c-462b172c82a1-serving-cert\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.872731 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hp2h\" (UniqueName: \"kubernetes.io/projected/13d43671-5ff7-4f29-850c-462b172c82a1-kube-api-access-4hp2h\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.872784 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-client-ca\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.872832 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d43671-5ff7-4f29-850c-462b172c82a1-serving-cert\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.872865 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-proxy-ca-bundles\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.872898 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-config\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.874868 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-config\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.875048 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-client-ca\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.876362 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-proxy-ca-bundles\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " 
pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.880743 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d43671-5ff7-4f29-850c-462b172c82a1-serving-cert\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.894301 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hp2h\" (UniqueName: \"kubernetes.io/projected/13d43671-5ff7-4f29-850c-462b172c82a1-kube-api-access-4hp2h\") pod \"controller-manager-7f777c7f5f-2qvsc\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.968349 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.980530 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3529cbe7-e333-4087-8d38-b1bea342f086","Type":"ContainerStarted","Data":"a6276b3211d6205f2604b90c0ac65b3d87287874e25b9f1e5e4bf151b94fd560"} Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.980888 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3529cbe7-e333-4087-8d38-b1bea342f086","Type":"ContainerStarted","Data":"ca70974464cb92a6cecadfcc3568501fa397e92dc1fe16935b63204f5525377c"} Jan 21 13:12:07 crc kubenswrapper[4959]: I0121 13:12:07.999229 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.999210514 podStartE2EDuration="1.999210514s" podCreationTimestamp="2026-01-21 13:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:12:07.997270461 +0000 UTC m=+188.960301004" watchObservedRunningTime="2026-01-21 13:12:07.999210514 +0000 UTC m=+188.962241057" Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.179608 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc"] Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.180790 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:08 crc kubenswrapper[4959]: W0121 13:12:08.188606 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13d43671_5ff7_4f29_850c_462b172c82a1.slice/crio-31ca73e8fbb7fe9f1647c27e34ad19140fc148f29c0cb4ba312bc12c68cb2124 WatchSource:0}: Error finding container 31ca73e8fbb7fe9f1647c27e34ad19140fc148f29c0cb4ba312bc12c68cb2124: Status 404 returned error can't find the container with id 31ca73e8fbb7fe9f1647c27e34ad19140fc148f29c0cb4ba312bc12c68cb2124 Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.277435 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kube-api-access\") pod \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\" (UID: \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\") " Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.277561 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kubelet-dir\") pod \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\" (UID: \"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78\") " Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.277673 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3aae7c49-036c-4b20-8ed2-f5a0ffa30d78" (UID: "3aae7c49-036c-4b20-8ed2-f5a0ffa30d78"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.277814 4959 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.282189 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3aae7c49-036c-4b20-8ed2-f5a0ffa30d78" (UID: "3aae7c49-036c-4b20-8ed2-f5a0ffa30d78"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.378877 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aae7c49-036c-4b20-8ed2-f5a0ffa30d78-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.990820 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3aae7c49-036c-4b20-8ed2-f5a0ffa30d78","Type":"ContainerDied","Data":"ac12bbfc8938afc112f9fa1000d515063cf98b38bff870515e5a84c2d09cec9e"} Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.990870 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac12bbfc8938afc112f9fa1000d515063cf98b38bff870515e5a84c2d09cec9e" Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.990834 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.992884 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" event={"ID":"13d43671-5ff7-4f29-850c-462b172c82a1","Type":"ContainerStarted","Data":"56bf09b4a828aea581cc9b9b892f219a2b6d6da268ff834a49d987975fc0662c"} Jan 21 13:12:08 crc kubenswrapper[4959]: I0121 13:12:08.992958 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" event={"ID":"13d43671-5ff7-4f29-850c-462b172c82a1","Type":"ContainerStarted","Data":"31ca73e8fbb7fe9f1647c27e34ad19140fc148f29c0cb4ba312bc12c68cb2124"} Jan 21 13:12:09 crc kubenswrapper[4959]: I0121 13:12:09.864389 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" podStartSLOduration=15.864367396 podStartE2EDuration="15.864367396s" podCreationTimestamp="2026-01-21 13:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:12:09.022133848 +0000 UTC m=+189.985164392" watchObservedRunningTime="2026-01-21 13:12:09.864367396 +0000 UTC m=+190.827397939" Jan 21 13:12:09 crc kubenswrapper[4959]: I0121 13:12:09.865184 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sxbb8"] Jan 21 13:12:09 crc kubenswrapper[4959]: I0121 13:12:09.997894 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:10 crc kubenswrapper[4959]: I0121 13:12:10.004522 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:17 crc kubenswrapper[4959]: I0121 13:12:17.033854 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7xxn" event={"ID":"fd12d219-aabd-430a-8567-e21c1674bbbf","Type":"ContainerStarted","Data":"078f963481cd7b9e5139ad35aaaf59f8d7d5fc54b8f2aaa1530c569cbcfc946d"} Jan 21 13:12:18 crc kubenswrapper[4959]: I0121 13:12:18.041025 4959 generic.go:334] "Generic (PLEG): container finished" podID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerID="fff919f38a4d198a286622c238a805eb307e3ad8cf2c9696b9c508849e290d11" exitCode=0 Jan 21 13:12:18 crc kubenswrapper[4959]: I0121 13:12:18.041106 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svjvm" event={"ID":"536f9813-b3c2-4be9-8e98-dcc68f2498a3","Type":"ContainerDied","Data":"fff919f38a4d198a286622c238a805eb307e3ad8cf2c9696b9c508849e290d11"} Jan 21 13:12:18 crc kubenswrapper[4959]: I0121 13:12:18.045327 4959 generic.go:334] "Generic (PLEG): container finished" podID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerID="078f963481cd7b9e5139ad35aaaf59f8d7d5fc54b8f2aaa1530c569cbcfc946d" exitCode=0 Jan 21 13:12:18 crc kubenswrapper[4959]: I0121 13:12:18.045361 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7xxn" event={"ID":"fd12d219-aabd-430a-8567-e21c1674bbbf","Type":"ContainerDied","Data":"078f963481cd7b9e5139ad35aaaf59f8d7d5fc54b8f2aaa1530c569cbcfc946d"} Jan 21 13:12:19 crc kubenswrapper[4959]: I0121 13:12:19.075674 4959 generic.go:334] "Generic (PLEG): container finished" 
podID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerID="2ae8211deb2849f3c22638a1e3cb9ffc1f436ca5612239d01d84671c455b0e65" exitCode=0 Jan 21 13:12:19 crc kubenswrapper[4959]: I0121 13:12:19.075742 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r59pr" event={"ID":"aea8e71a-36ca-4b96-8599-18a0b725e373","Type":"ContainerDied","Data":"2ae8211deb2849f3c22638a1e3cb9ffc1f436ca5612239d01d84671c455b0e65"} Jan 21 13:12:19 crc kubenswrapper[4959]: I0121 13:12:19.080384 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x66q8" event={"ID":"c01bf13b-8ada-46de-a969-cb5691c8d1c0","Type":"ContainerStarted","Data":"c47e3b0b59d0c55ce70154c8e0c1a2ac605cfce474328cbb9ab8f972713d4985"} Jan 21 13:12:19 crc kubenswrapper[4959]: I0121 13:12:19.088584 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svjvm" event={"ID":"536f9813-b3c2-4be9-8e98-dcc68f2498a3","Type":"ContainerStarted","Data":"0353947a203cfdb1cd62ef7e4a84f1ef241195c3232f86d58972e959a8c35cfd"} Jan 21 13:12:19 crc kubenswrapper[4959]: I0121 13:12:19.139546 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-svjvm" podStartSLOduration=2.809527452 podStartE2EDuration="57.139512387s" podCreationTimestamp="2026-01-21 13:11:22 +0000 UTC" firstStartedPulling="2026-01-21 13:11:24.510207748 +0000 UTC m=+145.473238291" lastFinishedPulling="2026-01-21 13:12:18.840192683 +0000 UTC m=+199.803223226" observedRunningTime="2026-01-21 13:12:19.135484777 +0000 UTC m=+200.098515340" watchObservedRunningTime="2026-01-21 13:12:19.139512387 +0000 UTC m=+200.102542940" Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.093583 4959 generic.go:334] "Generic (PLEG): container finished" podID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerID="078338b47992a74b0bdc3b98807134ddbd10a24977cf5267cf20d79c8dc1d701" exitCode=0 Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.093923 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf8h2" event={"ID":"27a81382-d955-4658-8b95-0bbbaf788ecf","Type":"ContainerDied","Data":"078338b47992a74b0bdc3b98807134ddbd10a24977cf5267cf20d79c8dc1d701"} Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.098471 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r59pr" event={"ID":"aea8e71a-36ca-4b96-8599-18a0b725e373","Type":"ContainerStarted","Data":"960f2c04dae1a7f7788b76711dd3a77c7c15bac513f8e41653ad68e8aeb61ecd"} Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.100427 4959 generic.go:334] "Generic (PLEG): container finished" podID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerID="c47e3b0b59d0c55ce70154c8e0c1a2ac605cfce474328cbb9ab8f972713d4985" exitCode=0 Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.100468 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x66q8" event={"ID":"c01bf13b-8ada-46de-a969-cb5691c8d1c0","Type":"ContainerDied","Data":"c47e3b0b59d0c55ce70154c8e0c1a2ac605cfce474328cbb9ab8f972713d4985"} Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.104249 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7xxn" event={"ID":"fd12d219-aabd-430a-8567-e21c1674bbbf","Type":"ContainerStarted","Data":"076b58a7f5470a22f46ace80bfa46d0ceaa374144f7f9bf03bb8f8224860e072"} Jan 21 13:12:20 crc 
kubenswrapper[4959]: I0121 13:12:20.161611 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r59pr" podStartSLOduration=3.045943492 podStartE2EDuration="1m0.161594259s" podCreationTimestamp="2026-01-21 13:11:20 +0000 UTC" firstStartedPulling="2026-01-21 13:11:22.401483039 +0000 UTC m=+143.364513582" lastFinishedPulling="2026-01-21 13:12:19.517133806 +0000 UTC m=+200.480164349" observedRunningTime="2026-01-21 13:12:20.159953024 +0000 UTC m=+201.122983587" watchObservedRunningTime="2026-01-21 13:12:20.161594259 +0000 UTC m=+201.124624792" Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.179927 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n7xxn" podStartSLOduration=3.81993341 podStartE2EDuration="57.179906576s" podCreationTimestamp="2026-01-21 13:11:23 +0000 UTC" firstStartedPulling="2026-01-21 13:11:25.529357801 +0000 UTC m=+146.492388344" lastFinishedPulling="2026-01-21 13:12:18.889330977 +0000 UTC m=+199.852361510" observedRunningTime="2026-01-21 13:12:20.178991131 +0000 UTC m=+201.142021664" watchObservedRunningTime="2026-01-21 13:12:20.179906576 +0000 UTC m=+201.142937119" Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.505447 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:12:20 crc kubenswrapper[4959]: I0121 13:12:20.505513 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.110676 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x66q8" event={"ID":"c01bf13b-8ada-46de-a969-cb5691c8d1c0","Type":"ContainerStarted","Data":"41cbf7932040d1e18c3d7f8e19c134f8e8b913f4dc4f25472721988c6e3574a8"} Jan 21 13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.112049 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72ps5" event={"ID":"8899f354-3f43-4222-88a9-221ca1a6dc6e","Type":"ContainerStarted","Data":"b58974dbaad639f2ef93db5b042cbd0b949314af49d68961de8d5f5d39841099"} Jan 21 13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.114811 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf8h2" event={"ID":"27a81382-d955-4658-8b95-0bbbaf788ecf","Type":"ContainerStarted","Data":"1b4df3379700e3c0304f20c1a609022f43f618dbb073568f4323be5273489bea"} Jan 21 13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.379817 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.379876 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.379927 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 
13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.380460 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.380550 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91" gracePeriod=600 Jan 21 13:12:21 crc kubenswrapper[4959]: I0121 13:12:21.573350 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-r59pr" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="registry-server" probeResult="failure" output=< Jan 21 13:12:21 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 13:12:21 crc kubenswrapper[4959]: > Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.122300 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91" exitCode=0 Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.122382 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91"} Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.122653 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"13642c440229a641b98715e5b86f12964d19facd8a63f7a2ba469a1067d57fdf"} Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.124298 4959 generic.go:334] "Generic (PLEG): container finished" podID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerID="b58974dbaad639f2ef93db5b042cbd0b949314af49d68961de8d5f5d39841099" exitCode=0 Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.124382 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72ps5" event={"ID":"8899f354-3f43-4222-88a9-221ca1a6dc6e","Type":"ContainerDied","Data":"b58974dbaad639f2ef93db5b042cbd0b949314af49d68961de8d5f5d39841099"} Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.217204 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x66q8" podStartSLOduration=5.183410606 podStartE2EDuration="1m3.217185163s" podCreationTimestamp="2026-01-21 13:11:19 +0000 UTC" firstStartedPulling="2026-01-21 13:11:22.456494243 +0000 UTC m=+143.419524786" lastFinishedPulling="2026-01-21 13:12:20.49026879 +0000 UTC m=+201.453299343" observedRunningTime="2026-01-21 13:12:22.184951898 +0000 UTC m=+203.147982441" watchObservedRunningTime="2026-01-21 13:12:22.217185163 +0000 UTC m=+203.180215706" Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.478043 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-svjvm" Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.478087 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-svjvm" Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.537818 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-svjvm" Jan 21 13:12:22 crc kubenswrapper[4959]: I0121 13:12:22.557293 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vf8h2" podStartSLOduration=4.429592197 podStartE2EDuration="1m2.557271974s" podCreationTimestamp="2026-01-21 13:11:20 +0000 UTC" firstStartedPulling="2026-01-21 13:11:22.393687557 +0000 UTC m=+143.356718100" lastFinishedPulling="2026-01-21 13:12:20.521367334 +0000 UTC m=+201.484397877" observedRunningTime="2026-01-21 13:12:22.217850571 +0000 UTC m=+203.180881124" watchObservedRunningTime="2026-01-21 13:12:22.557271974 +0000 UTC m=+203.520302517" Jan 21 13:12:23 crc kubenswrapper[4959]: I0121 13:12:23.138201 4959 generic.go:334] "Generic (PLEG): container finished" podID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerID="f1bc6f24ffaa4fdd77578fd04546f77e62289a8fe0895f48d6c6d8c9b4bee1d0" exitCode=0 Jan 21 13:12:23 crc kubenswrapper[4959]: I0121 13:12:23.138299 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv8w6" event={"ID":"ea3b719b-be6f-4a11-a13c-ba1bfca953a7","Type":"ContainerDied","Data":"f1bc6f24ffaa4fdd77578fd04546f77e62289a8fe0895f48d6c6d8c9b4bee1d0"} Jan 21 13:12:23 crc kubenswrapper[4959]: I0121 13:12:23.859207 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:12:23 crc kubenswrapper[4959]: I0121 13:12:23.859869 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:12:24 crc kubenswrapper[4959]: I0121 13:12:24.903252 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7xxn" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="registry-server" probeResult="failure" output=< Jan 21 13:12:24 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 13:12:24 crc kubenswrapper[4959]: > Jan 21 13:12:26 crc kubenswrapper[4959]: I0121 13:12:26.155580 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72ps5" event={"ID":"8899f354-3f43-4222-88a9-221ca1a6dc6e","Type":"ContainerStarted","Data":"bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7"} Jan 21 13:12:26 crc kubenswrapper[4959]: I0121 13:12:26.157380 4959 generic.go:334] "Generic (PLEG): container finished" podID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerID="b835b0db62594ce54a128e491431ed322b4f0fd9531a1dc7ab1ab01f82a727cc" exitCode=0 Jan 21 13:12:26 crc kubenswrapper[4959]: I0121 13:12:26.157433 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhqcb" event={"ID":"0220f7cc-761e-4995-aa56-6c543cd5a294","Type":"ContainerDied","Data":"b835b0db62594ce54a128e491431ed322b4f0fd9531a1dc7ab1ab01f82a727cc"} Jan 21 13:12:26 crc kubenswrapper[4959]: I0121 13:12:26.177066 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-72ps5" podStartSLOduration=4.676810046 
podStartE2EDuration="1m4.177043994s" podCreationTimestamp="2026-01-21 13:11:22 +0000 UTC" firstStartedPulling="2026-01-21 13:11:24.503687211 +0000 UTC m=+145.466717754" lastFinishedPulling="2026-01-21 13:12:24.003921159 +0000 UTC m=+204.966951702" observedRunningTime="2026-01-21 13:12:26.173369934 +0000 UTC m=+207.136400477" watchObservedRunningTime="2026-01-21 13:12:26.177043994 +0000 UTC m=+207.140074537" Jan 21 13:12:27 crc kubenswrapper[4959]: I0121 13:12:27.165930 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv8w6" event={"ID":"ea3b719b-be6f-4a11-a13c-ba1bfca953a7","Type":"ContainerStarted","Data":"1db44a9ed4e8dad5b13fd77972561461c97338f32aa6c92ecd13f1b29c9badf2"} Jan 21 13:12:27 crc kubenswrapper[4959]: I0121 13:12:27.185836 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tv8w6" podStartSLOduration=4.7364416590000005 podStartE2EDuration="1m8.185818523s" podCreationTimestamp="2026-01-21 13:11:19 +0000 UTC" firstStartedPulling="2026-01-21 13:11:22.238601445 +0000 UTC m=+143.201631988" lastFinishedPulling="2026-01-21 13:12:25.687978309 +0000 UTC m=+206.651008852" observedRunningTime="2026-01-21 13:12:27.182316368 +0000 UTC m=+208.145346931" watchObservedRunningTime="2026-01-21 13:12:27.185818523 +0000 UTC m=+208.148849066" Jan 21 13:12:29 crc kubenswrapper[4959]: I0121 13:12:29.180467 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhqcb" event={"ID":"0220f7cc-761e-4995-aa56-6c543cd5a294","Type":"ContainerStarted","Data":"a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36"} Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.111246 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.111707 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.157630 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.200520 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qhqcb" podStartSLOduration=7.080525276 podStartE2EDuration="1m9.200484329s" podCreationTimestamp="2026-01-21 13:11:21 +0000 UTC" firstStartedPulling="2026-01-21 13:11:24.515627346 +0000 UTC m=+145.478657889" lastFinishedPulling="2026-01-21 13:12:26.635586399 +0000 UTC m=+207.598616942" observedRunningTime="2026-01-21 13:12:30.19908039 +0000 UTC m=+211.162110933" watchObservedRunningTime="2026-01-21 13:12:30.200484329 +0000 UTC m=+211.163514882" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.547914 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.590454 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.814507 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.814581 4959 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.818476 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.818554 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.856051 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:12:30 crc kubenswrapper[4959]: I0121 13:12:30.861974 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:12:31 crc kubenswrapper[4959]: I0121 13:12:31.225552 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:12:31 crc kubenswrapper[4959]: I0121 13:12:31.225828 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:12:32 crc kubenswrapper[4959]: I0121 13:12:32.043088 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qhqcb" Jan 21 13:12:32 crc kubenswrapper[4959]: I0121 13:12:32.043190 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qhqcb" Jan 21 13:12:32 crc kubenswrapper[4959]: I0121 13:12:32.078711 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qhqcb" Jan 21 13:12:32 crc kubenswrapper[4959]: I0121 13:12:32.522025 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-svjvm" Jan 21 13:12:32 crc kubenswrapper[4959]: I0121 13:12:32.716711 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r59pr"] Jan 21 13:12:32 crc kubenswrapper[4959]: I0121 13:12:32.716967 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r59pr" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="registry-server" containerID="cri-o://960f2c04dae1a7f7788b76711dd3a77c7c15bac513f8e41653ad68e8aeb61ecd" gracePeriod=2 Jan 21 13:12:33 crc kubenswrapper[4959]: I0121 13:12:33.233299 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-72ps5" Jan 21 13:12:33 crc kubenswrapper[4959]: I0121 13:12:33.233383 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-72ps5" Jan 21 13:12:33 crc kubenswrapper[4959]: I0121 13:12:33.276107 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-72ps5" Jan 21 13:12:33 crc kubenswrapper[4959]: I0121 13:12:33.533485 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-svjvm" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="registry-server" probeResult="failure" output=< Jan 21 13:12:33 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 13:12:33 crc kubenswrapper[4959]: > Jan 21 13:12:33 crc 
kubenswrapper[4959]: I0121 13:12:33.914363 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:12:33 crc kubenswrapper[4959]: I0121 13:12:33.970762 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:12:34 crc kubenswrapper[4959]: I0121 13:12:34.274327 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-72ps5" Jan 21 13:12:34 crc kubenswrapper[4959]: I0121 13:12:34.520700 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vf8h2"] Jan 21 13:12:34 crc kubenswrapper[4959]: I0121 13:12:34.520965 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vf8h2" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerName="registry-server" containerID="cri-o://1b4df3379700e3c0304f20c1a609022f43f618dbb073568f4323be5273489bea" gracePeriod=2 Jan 21 13:12:34 crc kubenswrapper[4959]: I0121 13:12:34.612983 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc"] Jan 21 13:12:34 crc kubenswrapper[4959]: I0121 13:12:34.613221 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" podUID="13d43671-5ff7-4f29-850c-462b172c82a1" containerName="controller-manager" containerID="cri-o://56bf09b4a828aea581cc9b9b892f219a2b6d6da268ff834a49d987975fc0662c" gracePeriod=30 Jan 21 13:12:34 crc kubenswrapper[4959]: I0121 13:12:34.712317 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt"] Jan 21 13:12:34 crc kubenswrapper[4959]: I0121 13:12:34.712569 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" podUID="c5a67b56-220f-4248-8b7a-bacf2caa0400" containerName="route-controller-manager" containerID="cri-o://3a48e5a9ce95353ebce7a98963389ec3fc8d29dd59f642707fa3b1d36ade0e3f" gracePeriod=30 Jan 21 13:12:34 crc kubenswrapper[4959]: I0121 13:12:34.890351 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" podUID="675c0c62-9109-4128-93c9-801f66debbaf" containerName="oauth-openshift" containerID="cri-o://afa0ce1b89b63f6aaf3f543abb5c988ab62283b0c263fb3ef39a28901d74ef3a" gracePeriod=15 Jan 21 13:12:35 crc kubenswrapper[4959]: I0121 13:12:35.690233 4959 patch_prober.go:28] interesting pod/route-controller-manager-6cb9849c86-428lt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 21 13:12:35 crc kubenswrapper[4959]: I0121 13:12:35.690350 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" podUID="c5a67b56-220f-4248-8b7a-bacf2caa0400" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 21 13:12:37 crc kubenswrapper[4959]: I0121 13:12:37.200379 4959 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svjvm"] Jan 21 13:12:37 crc kubenswrapper[4959]: I0121 13:12:37.200799 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-svjvm" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="registry-server" containerID="cri-o://0353947a203cfdb1cd62ef7e4a84f1ef241195c3232f86d58972e959a8c35cfd" gracePeriod=2 Jan 21 13:12:37 crc kubenswrapper[4959]: I0121 13:12:37.969457 4959 patch_prober.go:28] interesting pod/controller-manager-7f777c7f5f-2qvsc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 21 13:12:37 crc kubenswrapper[4959]: I0121 13:12:37.969518 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" podUID="13d43671-5ff7-4f29-850c-462b172c82a1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 21 13:12:38 crc kubenswrapper[4959]: I0121 13:12:38.233223 4959 generic.go:334] "Generic (PLEG): container finished" podID="c5a67b56-220f-4248-8b7a-bacf2caa0400" containerID="3a48e5a9ce95353ebce7a98963389ec3fc8d29dd59f642707fa3b1d36ade0e3f" exitCode=0 Jan 21 13:12:38 crc kubenswrapper[4959]: I0121 13:12:38.233288 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" event={"ID":"c5a67b56-220f-4248-8b7a-bacf2caa0400","Type":"ContainerDied","Data":"3a48e5a9ce95353ebce7a98963389ec3fc8d29dd59f642707fa3b1d36ade0e3f"} Jan 21 13:12:38 crc kubenswrapper[4959]: I0121 13:12:38.751912 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7xxn"] Jan 21 13:12:38 crc kubenswrapper[4959]: I0121 13:12:38.752562 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n7xxn" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="registry-server" containerID="cri-o://076b58a7f5470a22f46ace80bfa46d0ceaa374144f7f9bf03bb8f8224860e072" gracePeriod=2 Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.240350 4959 generic.go:334] "Generic (PLEG): container finished" podID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerID="0353947a203cfdb1cd62ef7e4a84f1ef241195c3232f86d58972e959a8c35cfd" exitCode=0 Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.240392 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svjvm" event={"ID":"536f9813-b3c2-4be9-8e98-dcc68f2498a3","Type":"ContainerDied","Data":"0353947a203cfdb1cd62ef7e4a84f1ef241195c3232f86d58972e959a8c35cfd"} Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.241793 4959 generic.go:334] "Generic (PLEG): container finished" podID="675c0c62-9109-4128-93c9-801f66debbaf" containerID="afa0ce1b89b63f6aaf3f543abb5c988ab62283b0c263fb3ef39a28901d74ef3a" exitCode=0 Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.241858 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" event={"ID":"675c0c62-9109-4128-93c9-801f66debbaf","Type":"ContainerDied","Data":"afa0ce1b89b63f6aaf3f543abb5c988ab62283b0c263fb3ef39a28901d74ef3a"} Jan 21 13:12:39 crc kubenswrapper[4959]: 
I0121 13:12:39.243139 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vf8h2_27a81382-d955-4658-8b95-0bbbaf788ecf/registry-server/0.log" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.243987 4959 generic.go:334] "Generic (PLEG): container finished" podID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerID="1b4df3379700e3c0304f20c1a609022f43f618dbb073568f4323be5273489bea" exitCode=137 Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.244245 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf8h2" event={"ID":"27a81382-d955-4658-8b95-0bbbaf788ecf","Type":"ContainerDied","Data":"1b4df3379700e3c0304f20c1a609022f43f618dbb073568f4323be5273489bea"} Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.245284 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r59pr_aea8e71a-36ca-4b96-8599-18a0b725e373/registry-server/0.log" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.245897 4959 generic.go:334] "Generic (PLEG): container finished" podID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerID="960f2c04dae1a7f7788b76711dd3a77c7c15bac513f8e41653ad68e8aeb61ecd" exitCode=137 Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.245951 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r59pr" event={"ID":"aea8e71a-36ca-4b96-8599-18a0b725e373","Type":"ContainerDied","Data":"960f2c04dae1a7f7788b76711dd3a77c7c15bac513f8e41653ad68e8aeb61ecd"} Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.247305 4959 generic.go:334] "Generic (PLEG): container finished" podID="13d43671-5ff7-4f29-850c-462b172c82a1" containerID="56bf09b4a828aea581cc9b9b892f219a2b6d6da268ff834a49d987975fc0662c" exitCode=0 Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.247339 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" event={"ID":"13d43671-5ff7-4f29-850c-462b172c82a1","Type":"ContainerDied","Data":"56bf09b4a828aea581cc9b9b892f219a2b6d6da268ff834a49d987975fc0662c"} Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.723587 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.731504 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756199 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hp2h\" (UniqueName: \"kubernetes.io/projected/13d43671-5ff7-4f29-850c-462b172c82a1-kube-api-access-4hp2h\") pod \"13d43671-5ff7-4f29-850c-462b172c82a1\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756450 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d43671-5ff7-4f29-850c-462b172c82a1-serving-cert\") pod \"13d43671-5ff7-4f29-850c-462b172c82a1\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756495 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-client-ca\") pod \"13d43671-5ff7-4f29-850c-462b172c82a1\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756552 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-client-ca\") pod \"c5a67b56-220f-4248-8b7a-bacf2caa0400\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756638 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-config\") pod \"13d43671-5ff7-4f29-850c-462b172c82a1\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756666 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v4ws\" (UniqueName: \"kubernetes.io/projected/c5a67b56-220f-4248-8b7a-bacf2caa0400-kube-api-access-9v4ws\") pod \"c5a67b56-220f-4248-8b7a-bacf2caa0400\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756717 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a67b56-220f-4248-8b7a-bacf2caa0400-serving-cert\") pod \"c5a67b56-220f-4248-8b7a-bacf2caa0400\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756739 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-config\") pod \"c5a67b56-220f-4248-8b7a-bacf2caa0400\" (UID: \"c5a67b56-220f-4248-8b7a-bacf2caa0400\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.756796 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-proxy-ca-bundles\") pod \"13d43671-5ff7-4f29-850c-462b172c82a1\" (UID: \"13d43671-5ff7-4f29-850c-462b172c82a1\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.757790 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"13d43671-5ff7-4f29-850c-462b172c82a1" (UID: "13d43671-5ff7-4f29-850c-462b172c82a1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.758001 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-client-ca" (OuterVolumeSpecName: "client-ca") pod "c5a67b56-220f-4248-8b7a-bacf2caa0400" (UID: "c5a67b56-220f-4248-8b7a-bacf2caa0400"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.758131 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-config" (OuterVolumeSpecName: "config") pod "c5a67b56-220f-4248-8b7a-bacf2caa0400" (UID: "c5a67b56-220f-4248-8b7a-bacf2caa0400"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.758657 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "13d43671-5ff7-4f29-850c-462b172c82a1" (UID: "13d43671-5ff7-4f29-850c-462b172c82a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.759479 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-config" (OuterVolumeSpecName: "config") pod "13d43671-5ff7-4f29-850c-462b172c82a1" (UID: "13d43671-5ff7-4f29-850c-462b172c82a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.767299 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7675bf54db-ftnb8"] Jan 21 13:12:39 crc kubenswrapper[4959]: E0121 13:12:39.772048 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aae7c49-036c-4b20-8ed2-f5a0ffa30d78" containerName="pruner" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.772118 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aae7c49-036c-4b20-8ed2-f5a0ffa30d78" containerName="pruner" Jan 21 13:12:39 crc kubenswrapper[4959]: E0121 13:12:39.772136 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d43671-5ff7-4f29-850c-462b172c82a1" containerName="controller-manager" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.772148 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d43671-5ff7-4f29-850c-462b172c82a1" containerName="controller-manager" Jan 21 13:12:39 crc kubenswrapper[4959]: E0121 13:12:39.772173 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a67b56-220f-4248-8b7a-bacf2caa0400" containerName="route-controller-manager" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.772185 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a67b56-220f-4248-8b7a-bacf2caa0400" containerName="route-controller-manager" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.772349 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aae7c49-036c-4b20-8ed2-f5a0ffa30d78" containerName="pruner" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.772366 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d43671-5ff7-4f29-850c-462b172c82a1" containerName="controller-manager" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.772381 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a67b56-220f-4248-8b7a-bacf2caa0400" containerName="route-controller-manager" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.772924 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.775223 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d43671-5ff7-4f29-850c-462b172c82a1-kube-api-access-4hp2h" (OuterVolumeSpecName: "kube-api-access-4hp2h") pod "13d43671-5ff7-4f29-850c-462b172c82a1" (UID: "13d43671-5ff7-4f29-850c-462b172c82a1"). InnerVolumeSpecName "kube-api-access-4hp2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.775289 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a67b56-220f-4248-8b7a-bacf2caa0400-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5a67b56-220f-4248-8b7a-bacf2caa0400" (UID: "c5a67b56-220f-4248-8b7a-bacf2caa0400"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.775398 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d43671-5ff7-4f29-850c-462b172c82a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13d43671-5ff7-4f29-850c-462b172c82a1" (UID: "13d43671-5ff7-4f29-850c-462b172c82a1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.782231 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a67b56-220f-4248-8b7a-bacf2caa0400-kube-api-access-9v4ws" (OuterVolumeSpecName: "kube-api-access-9v4ws") pod "c5a67b56-220f-4248-8b7a-bacf2caa0400" (UID: "c5a67b56-220f-4248-8b7a-bacf2caa0400"). InnerVolumeSpecName "kube-api-access-9v4ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.791631 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7675bf54db-ftnb8"] Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867267 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-config\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867331 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmlj\" (UniqueName: \"kubernetes.io/projected/e9823016-93fd-4112-b72f-f0b258e41caa-kube-api-access-dlmlj\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867387 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9823016-93fd-4112-b72f-f0b258e41caa-serving-cert\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867466 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-proxy-ca-bundles\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867500 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-client-ca\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867576 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867591 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v4ws\" (UniqueName: \"kubernetes.io/projected/c5a67b56-220f-4248-8b7a-bacf2caa0400-kube-api-access-9v4ws\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867614 4959 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a67b56-220f-4248-8b7a-bacf2caa0400-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867626 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867638 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867650 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hp2h\" (UniqueName: \"kubernetes.io/projected/13d43671-5ff7-4f29-850c-462b172c82a1-kube-api-access-4hp2h\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867663 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13d43671-5ff7-4f29-850c-462b172c82a1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867674 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13d43671-5ff7-4f29-850c-462b172c82a1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.867685 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a67b56-220f-4248-8b7a-bacf2caa0400-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.939362 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.968833 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-serving-cert\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.968921 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-trusted-ca-bundle\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.968940 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-service-ca\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.968993 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-router-certs\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969033 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-ocp-branding-template\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969047 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-login\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969071 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/675c0c62-9109-4128-93c9-801f66debbaf-audit-dir\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969105 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4z7j\" (UniqueName: \"kubernetes.io/projected/675c0c62-9109-4128-93c9-801f66debbaf-kube-api-access-c4z7j\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969126 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-session\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc 
kubenswrapper[4959]: I0121 13:12:39.969144 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-idp-0-file-data\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969163 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-cliconfig\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969184 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-audit-policies\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969200 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-error\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969220 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-provider-selection\") pod \"675c0c62-9109-4128-93c9-801f66debbaf\" (UID: \"675c0c62-9109-4128-93c9-801f66debbaf\") " Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969326 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-client-ca\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969378 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-config\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969395 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmlj\" (UniqueName: \"kubernetes.io/projected/e9823016-93fd-4112-b72f-f0b258e41caa-kube-api-access-dlmlj\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969427 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9823016-93fd-4112-b72f-f0b258e41caa-serving-cert\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " 
pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.969469 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-proxy-ca-bundles\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.970666 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/675c0c62-9109-4128-93c9-801f66debbaf-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.971143 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-proxy-ca-bundles\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.971648 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-client-ca\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.972967 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-config\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.976887 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.977394 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.977593 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9823016-93fd-4112-b72f-f0b258e41caa-serving-cert\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.980043 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.984617 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.994575 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:39 crc kubenswrapper[4959]: I0121 13:12:39.995689 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:39.997745 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.003205 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.004633 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.005379 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.005682 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675c0c62-9109-4128-93c9-801f66debbaf-kube-api-access-c4z7j" (OuterVolumeSpecName: "kube-api-access-c4z7j") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "kube-api-access-c4z7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.005769 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.005889 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "675c0c62-9109-4128-93c9-801f66debbaf" (UID: "675c0c62-9109-4128-93c9-801f66debbaf"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.006775 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmlj\" (UniqueName: \"kubernetes.io/projected/e9823016-93fd-4112-b72f-f0b258e41caa-kube-api-access-dlmlj\") pod \"controller-manager-7675bf54db-ftnb8\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.021542 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r59pr_aea8e71a-36ca-4b96-8599-18a0b725e373/registry-server/0.log" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.022266 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.070840 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl2cf\" (UniqueName: \"kubernetes.io/projected/aea8e71a-36ca-4b96-8599-18a0b725e373-kube-api-access-pl2cf\") pod \"aea8e71a-36ca-4b96-8599-18a0b725e373\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.071665 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-catalog-content\") pod \"aea8e71a-36ca-4b96-8599-18a0b725e373\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.071819 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-utilities\") pod \"aea8e71a-36ca-4b96-8599-18a0b725e373\" (UID: \"aea8e71a-36ca-4b96-8599-18a0b725e373\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072321 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072370 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072386 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072398 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072410 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072440 4959 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/675c0c62-9109-4128-93c9-801f66debbaf-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072456 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4z7j\" (UniqueName: \"kubernetes.io/projected/675c0c62-9109-4128-93c9-801f66debbaf-kube-api-access-c4z7j\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072469 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 
13:12:40.072480 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072492 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072523 4959 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/675c0c62-9109-4128-93c9-801f66debbaf-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072536 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072550 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072562 4959 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/675c0c62-9109-4128-93c9-801f66debbaf-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.072975 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-utilities" (OuterVolumeSpecName: "utilities") pod "aea8e71a-36ca-4b96-8599-18a0b725e373" (UID: "aea8e71a-36ca-4b96-8599-18a0b725e373"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.083704 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea8e71a-36ca-4b96-8599-18a0b725e373-kube-api-access-pl2cf" (OuterVolumeSpecName: "kube-api-access-pl2cf") pod "aea8e71a-36ca-4b96-8599-18a0b725e373" (UID: "aea8e71a-36ca-4b96-8599-18a0b725e373"). InnerVolumeSpecName "kube-api-access-pl2cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.118732 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aea8e71a-36ca-4b96-8599-18a0b725e373" (UID: "aea8e71a-36ca-4b96-8599-18a0b725e373"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.126914 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.149893 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.173506 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl2cf\" (UniqueName: \"kubernetes.io/projected/aea8e71a-36ca-4b96-8599-18a0b725e373-kube-api-access-pl2cf\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.173534 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.173544 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea8e71a-36ca-4b96-8599-18a0b725e373-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.253570 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" event={"ID":"13d43671-5ff7-4f29-850c-462b172c82a1","Type":"ContainerDied","Data":"31ca73e8fbb7fe9f1647c27e34ad19140fc148f29c0cb4ba312bc12c68cb2124"} Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.253603 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.253623 4959 scope.go:117] "RemoveContainer" containerID="56bf09b4a828aea581cc9b9b892f219a2b6d6da268ff834a49d987975fc0662c" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.257038 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" event={"ID":"c5a67b56-220f-4248-8b7a-bacf2caa0400","Type":"ContainerDied","Data":"ffb78143041ef7bba3ed22ae7964ac0cd61573c7101a492f2056d72225536993"} Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.257069 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.263524 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" event={"ID":"675c0c62-9109-4128-93c9-801f66debbaf","Type":"ContainerDied","Data":"4a5080b5ab42d8ed8eac7922688c894f81dacbc92c9b239c883fbd1a0b770098"} Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.263557 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sxbb8" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.271776 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r59pr_aea8e71a-36ca-4b96-8599-18a0b725e373/registry-server/0.log" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.272677 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r59pr" event={"ID":"aea8e71a-36ca-4b96-8599-18a0b725e373","Type":"ContainerDied","Data":"585a200515c12f1a581d3b5ab5c691361e944aa486d8c3ff7c77aba29fc30928"} Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.272740 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r59pr" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.290875 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.299619 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f777c7f5f-2qvsc"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.303779 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.308315 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb9849c86-428lt"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.317123 4959 scope.go:117] "RemoveContainer" containerID="3a48e5a9ce95353ebce7a98963389ec3fc8d29dd59f642707fa3b1d36ade0e3f" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.325907 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r59pr"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.329316 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r59pr"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.335600 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sxbb8"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.339591 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sxbb8"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.371617 4959 scope.go:117] "RemoveContainer" containerID="afa0ce1b89b63f6aaf3f543abb5c988ab62283b0c263fb3ef39a28901d74ef3a" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.398566 4959 scope.go:117] "RemoveContainer" containerID="960f2c04dae1a7f7788b76711dd3a77c7c15bac513f8e41653ad68e8aeb61ecd" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.411268 4959 scope.go:117] "RemoveContainer" containerID="2ae8211deb2849f3c22638a1e3cb9ffc1f436ca5612239d01d84671c455b0e65" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.431424 4959 scope.go:117] "RemoveContainer" containerID="1e993d16b50c09db983ed6ee16ab6ec07ec7dc1da90c53f2dfba997f220fdb87" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.481226 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svjvm" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.542339 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7675bf54db-ftnb8"] Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.542548 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vf8h2_27a81382-d955-4658-8b95-0bbbaf788ecf/registry-server/0.log" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.543657 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:12:40 crc kubenswrapper[4959]: W0121 13:12:40.546151 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9823016_93fd_4112_b72f_f0b258e41caa.slice/crio-b93509ad572c6caaf82b014c4d0b317fe7a6fe48132700d72770bc687bb2c3c5 WatchSource:0}: Error finding container b93509ad572c6caaf82b014c4d0b317fe7a6fe48132700d72770bc687bb2c3c5: Status 404 returned error can't find the container with id b93509ad572c6caaf82b014c4d0b317fe7a6fe48132700d72770bc687bb2c3c5 Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.581436 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2bhl\" (UniqueName: \"kubernetes.io/projected/27a81382-d955-4658-8b95-0bbbaf788ecf-kube-api-access-g2bhl\") pod \"27a81382-d955-4658-8b95-0bbbaf788ecf\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.581528 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh2bh\" (UniqueName: \"kubernetes.io/projected/536f9813-b3c2-4be9-8e98-dcc68f2498a3-kube-api-access-qh2bh\") pod \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.581562 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-utilities\") pod \"27a81382-d955-4658-8b95-0bbbaf788ecf\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.581592 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-utilities\") pod \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.581609 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-catalog-content\") pod \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\" (UID: \"536f9813-b3c2-4be9-8e98-dcc68f2498a3\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.581657 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-catalog-content\") pod \"27a81382-d955-4658-8b95-0bbbaf788ecf\" (UID: \"27a81382-d955-4658-8b95-0bbbaf788ecf\") " Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.582593 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-utilities" (OuterVolumeSpecName: "utilities") pod "27a81382-d955-4658-8b95-0bbbaf788ecf" (UID: "27a81382-d955-4658-8b95-0bbbaf788ecf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.582930 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-utilities" (OuterVolumeSpecName: "utilities") pod "536f9813-b3c2-4be9-8e98-dcc68f2498a3" (UID: "536f9813-b3c2-4be9-8e98-dcc68f2498a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.586053 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536f9813-b3c2-4be9-8e98-dcc68f2498a3-kube-api-access-qh2bh" (OuterVolumeSpecName: "kube-api-access-qh2bh") pod "536f9813-b3c2-4be9-8e98-dcc68f2498a3" (UID: "536f9813-b3c2-4be9-8e98-dcc68f2498a3"). InnerVolumeSpecName "kube-api-access-qh2bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.589528 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a81382-d955-4658-8b95-0bbbaf788ecf-kube-api-access-g2bhl" (OuterVolumeSpecName: "kube-api-access-g2bhl") pod "27a81382-d955-4658-8b95-0bbbaf788ecf" (UID: "27a81382-d955-4658-8b95-0bbbaf788ecf"). InnerVolumeSpecName "kube-api-access-g2bhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.606522 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "536f9813-b3c2-4be9-8e98-dcc68f2498a3" (UID: "536f9813-b3c2-4be9-8e98-dcc68f2498a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.645573 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27a81382-d955-4658-8b95-0bbbaf788ecf" (UID: "27a81382-d955-4658-8b95-0bbbaf788ecf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.682756 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.682793 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2bhl\" (UniqueName: \"kubernetes.io/projected/27a81382-d955-4658-8b95-0bbbaf788ecf-kube-api-access-g2bhl\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.682804 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh2bh\" (UniqueName: \"kubernetes.io/projected/536f9813-b3c2-4be9-8e98-dcc68f2498a3-kube-api-access-qh2bh\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.682813 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27a81382-d955-4658-8b95-0bbbaf788ecf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.682822 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:40 crc kubenswrapper[4959]: I0121 13:12:40.682829 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/536f9813-b3c2-4be9-8e98-dcc68f2498a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.280283 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" event={"ID":"e9823016-93fd-4112-b72f-f0b258e41caa","Type":"ContainerStarted","Data":"b93509ad572c6caaf82b014c4d0b317fe7a6fe48132700d72770bc687bb2c3c5"} Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.282389 4959 generic.go:334] "Generic (PLEG): container finished" podID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerID="076b58a7f5470a22f46ace80bfa46d0ceaa374144f7f9bf03bb8f8224860e072" exitCode=0 Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.282481 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7xxn" event={"ID":"fd12d219-aabd-430a-8567-e21c1674bbbf","Type":"ContainerDied","Data":"076b58a7f5470a22f46ace80bfa46d0ceaa374144f7f9bf03bb8f8224860e072"} Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.285671 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svjvm" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.290699 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vf8h2_27a81382-d955-4658-8b95-0bbbaf788ecf/registry-server/0.log" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.291463 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vf8h2" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.292262 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d43671-5ff7-4f29-850c-462b172c82a1" path="/var/lib/kubelet/pods/13d43671-5ff7-4f29-850c-462b172c82a1/volumes" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.292841 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="675c0c62-9109-4128-93c9-801f66debbaf" path="/var/lib/kubelet/pods/675c0c62-9109-4128-93c9-801f66debbaf/volumes" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.293391 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" path="/var/lib/kubelet/pods/aea8e71a-36ca-4b96-8599-18a0b725e373/volumes" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.294542 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a67b56-220f-4248-8b7a-bacf2caa0400" path="/var/lib/kubelet/pods/c5a67b56-220f-4248-8b7a-bacf2caa0400/volumes" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.295284 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svjvm" event={"ID":"536f9813-b3c2-4be9-8e98-dcc68f2498a3","Type":"ContainerDied","Data":"553bc947f8c28e9501e3177eaa31f23ad6a0f3224a36effa5162689b32bc3275"} Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.295316 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vf8h2" event={"ID":"27a81382-d955-4658-8b95-0bbbaf788ecf","Type":"ContainerDied","Data":"fbca5dddce6c8b43cb40765fa90aa08591ed5594a539e0b56d3ad048e080f20d"} Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.295338 4959 scope.go:117] "RemoveContainer" containerID="0353947a203cfdb1cd62ef7e4a84f1ef241195c3232f86d58972e959a8c35cfd" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.317139 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svjvm"] Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.318680 4959 scope.go:117] "RemoveContainer" containerID="fff919f38a4d198a286622c238a805eb307e3ad8cf2c9696b9c508849e290d11" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.322429 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-svjvm"] Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.326326 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vf8h2"] Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.329418 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vf8h2"] Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.348625 4959 scope.go:117] "RemoveContainer" containerID="1543bfb782100cbdb2d12508ceecf821e52d529e8197c21b6fb735a315881d89" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.361485 4959 scope.go:117] "RemoveContainer" containerID="1b4df3379700e3c0304f20c1a609022f43f618dbb073568f4323be5273489bea" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.383947 4959 scope.go:117] "RemoveContainer" containerID="078338b47992a74b0bdc3b98807134ddbd10a24977cf5267cf20d79c8dc1d701" Jan 21 13:12:41 crc kubenswrapper[4959]: I0121 13:12:41.401740 4959 scope.go:117] "RemoveContainer" containerID="813a79f95bc058c3fd2ea5ebd42119289c20fe5951a07031364df7c62a81f049" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.082748 4959 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qhqcb" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.303435 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" event={"ID":"e9823016-93fd-4112-b72f-f0b258e41caa","Type":"ContainerStarted","Data":"6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012"} Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.405576 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.505844 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkwkj\" (UniqueName: \"kubernetes.io/projected/fd12d219-aabd-430a-8567-e21c1674bbbf-kube-api-access-fkwkj\") pod \"fd12d219-aabd-430a-8567-e21c1674bbbf\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.505922 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-utilities\") pod \"fd12d219-aabd-430a-8567-e21c1674bbbf\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.505967 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-catalog-content\") pod \"fd12d219-aabd-430a-8567-e21c1674bbbf\" (UID: \"fd12d219-aabd-430a-8567-e21c1674bbbf\") " Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.506743 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-utilities" (OuterVolumeSpecName: "utilities") pod "fd12d219-aabd-430a-8567-e21c1674bbbf" (UID: "fd12d219-aabd-430a-8567-e21c1674bbbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.511300 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd12d219-aabd-430a-8567-e21c1674bbbf-kube-api-access-fkwkj" (OuterVolumeSpecName: "kube-api-access-fkwkj") pod "fd12d219-aabd-430a-8567-e21c1674bbbf" (UID: "fd12d219-aabd-430a-8567-e21c1674bbbf"). InnerVolumeSpecName "kube-api-access-fkwkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.607335 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkwkj\" (UniqueName: \"kubernetes.io/projected/fd12d219-aabd-430a-8567-e21c1674bbbf-kube-api-access-fkwkj\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.607433 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.655194 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd12d219-aabd-430a-8567-e21c1674bbbf" (UID: "fd12d219-aabd-430a-8567-e21c1674bbbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.715274 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd12d219-aabd-430a-8567-e21c1674bbbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.742820 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p"] Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743210 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675c0c62-9109-4128-93c9-801f66debbaf" containerName="oauth-openshift" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743239 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="675c0c62-9109-4128-93c9-801f66debbaf" containerName="oauth-openshift" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743256 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="extract-content" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743264 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="extract-content" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743275 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerName="extract-content" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743282 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerName="extract-content" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743291 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743296 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743306 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerName="extract-utilities" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743313 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerName="extract-utilities" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743321 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743327 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743338 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="extract-utilities" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743344 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="extract-utilities" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743356 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="registry-server" Jan 21 13:12:42 crc 
kubenswrapper[4959]: I0121 13:12:42.743364 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743379 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743385 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743394 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="extract-content" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743401 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="extract-content" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743412 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="extract-utilities" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743421 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="extract-utilities" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743432 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="extract-utilities" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743440 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="extract-utilities" Jan 21 13:12:42 crc kubenswrapper[4959]: E0121 13:12:42.743447 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="extract-content" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743456 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="extract-content" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743578 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea8e71a-36ca-4b96-8599-18a0b725e373" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743598 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743609 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="675c0c62-9109-4128-93c9-801f66debbaf" containerName="oauth-openshift" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743619 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.743632 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" containerName="registry-server" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.744198 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.750735 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-b77dbf775-b26rl"] Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.751896 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.750896 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.751067 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.751259 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.751359 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.751458 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.752031 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.757491 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.757887 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.758029 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.758200 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.758067 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.758340 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.758077 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.758414 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.758167 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.758587 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 13:12:42 crc kubenswrapper[4959]: 
I0121 13:12:42.768298 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.769022 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b77dbf775-b26rl"] Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.772690 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.776476 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.776863 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.777121 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p"] Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.799811 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816132 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-config\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816176 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816217 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816307 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85g95\" (UniqueName: \"kubernetes.io/projected/490f3d4c-1a65-46ad-b126-8ebdf7586c34-kube-api-access-85g95\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816339 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-audit-policies\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " 
pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816372 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-router-certs\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816404 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-client-ca\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816427 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tdtn\" (UniqueName: \"kubernetes.io/projected/2196bbf0-ac43-48b3-9c43-b640605a0495-kube-api-access-9tdtn\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816450 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816477 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-service-ca\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816500 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816521 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-login\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816552 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-error\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816574 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2196bbf0-ac43-48b3-9c43-b640605a0495-audit-dir\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816595 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-session\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816614 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/490f3d4c-1a65-46ad-b126-8ebdf7586c34-serving-cert\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816640 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.816667 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917585 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917645 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917673 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-config\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917695 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917726 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917760 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85g95\" (UniqueName: \"kubernetes.io/projected/490f3d4c-1a65-46ad-b126-8ebdf7586c34-kube-api-access-85g95\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917795 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-audit-policies\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917815 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-router-certs\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917838 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-client-ca\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917860 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tdtn\" (UniqueName: \"kubernetes.io/projected/2196bbf0-ac43-48b3-9c43-b640605a0495-kube-api-access-9tdtn\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917880 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917904 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-service-ca\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917929 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917952 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-login\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.917987 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-error\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.918008 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-session\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.918029 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/490f3d4c-1a65-46ad-b126-8ebdf7586c34-serving-cert\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.918053 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2196bbf0-ac43-48b3-9c43-b640605a0495-audit-dir\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.918200 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2196bbf0-ac43-48b3-9c43-b640605a0495-audit-dir\") pod 
\"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.918518 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.919028 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-service-ca\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.919162 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-client-ca\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.919931 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-config\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.920309 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-audit-policies\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.920774 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.922462 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-error\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.923552 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " 
pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.924663 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.924746 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-login\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.925311 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/490f3d4c-1a65-46ad-b126-8ebdf7586c34-serving-cert\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.925364 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-session\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.925419 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.925732 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-system-router-certs\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.932148 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2196bbf0-ac43-48b3-9c43-b640605a0495-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.940279 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85g95\" (UniqueName: \"kubernetes.io/projected/490f3d4c-1a65-46ad-b126-8ebdf7586c34-kube-api-access-85g95\") pod \"route-controller-manager-77cbf8dbd6-2hv4p\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " 
pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:42 crc kubenswrapper[4959]: I0121 13:12:42.940536 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tdtn\" (UniqueName: \"kubernetes.io/projected/2196bbf0-ac43-48b3-9c43-b640605a0495-kube-api-access-9tdtn\") pod \"oauth-openshift-b77dbf775-b26rl\" (UID: \"2196bbf0-ac43-48b3-9c43-b640605a0495\") " pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.064497 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.075417 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.293986 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a81382-d955-4658-8b95-0bbbaf788ecf" path="/var/lib/kubelet/pods/27a81382-d955-4658-8b95-0bbbaf788ecf/volumes" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.295270 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536f9813-b3c2-4be9-8e98-dcc68f2498a3" path="/var/lib/kubelet/pods/536f9813-b3c2-4be9-8e98-dcc68f2498a3/volumes" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.318248 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7xxn" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.318271 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7xxn" event={"ID":"fd12d219-aabd-430a-8567-e21c1674bbbf","Type":"ContainerDied","Data":"f95bdfefb7b26c80ec5aba3b28e21796b652710ff882721c63720450d7ed8389"} Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.318329 4959 scope.go:117] "RemoveContainer" containerID="076b58a7f5470a22f46ace80bfa46d0ceaa374144f7f9bf03bb8f8224860e072" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.318661 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.324654 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.340842 4959 scope.go:117] "RemoveContainer" containerID="078f963481cd7b9e5139ad35aaaf59f8d7d5fc54b8f2aaa1530c569cbcfc946d" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.341270 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" podStartSLOduration=9.34125661 podStartE2EDuration="9.34125661s" podCreationTimestamp="2026-01-21 13:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:12:43.339538774 +0000 UTC m=+224.302569317" watchObservedRunningTime="2026-01-21 13:12:43.34125661 +0000 UTC m=+224.304287153" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.351081 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b77dbf775-b26rl"] Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.353963 4959 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7xxn"] Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.356661 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n7xxn"] Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.402399 4959 scope.go:117] "RemoveContainer" containerID="600d243c822fef9a85967245f15ffcb801cf1557246d65197ab895d97be2f732" Jan 21 13:12:43 crc kubenswrapper[4959]: I0121 13:12:43.484169 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p"] Jan 21 13:12:43 crc kubenswrapper[4959]: W0121 13:12:43.490449 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490f3d4c_1a65_46ad_b126_8ebdf7586c34.slice/crio-8f01669eebbfe1be600e28a7754fa1cb3b092c4dd80022308d7aeedce34a2e42 WatchSource:0}: Error finding container 8f01669eebbfe1be600e28a7754fa1cb3b092c4dd80022308d7aeedce34a2e42: Status 404 returned error can't find the container with id 8f01669eebbfe1be600e28a7754fa1cb3b092c4dd80022308d7aeedce34a2e42 Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.325461 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" event={"ID":"2196bbf0-ac43-48b3-9c43-b640605a0495","Type":"ContainerStarted","Data":"328e9a9cf2d0719fd6728c5fd75012eff29415a0d35b0193a7b4b6e3c57715c1"} Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.325763 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" event={"ID":"2196bbf0-ac43-48b3-9c43-b640605a0495","Type":"ContainerStarted","Data":"9c38b39d069b371fe2cc1715975b66757d25101e486ed28cc65149c1ce4a7c99"} Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.326119 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.328047 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" event={"ID":"490f3d4c-1a65-46ad-b126-8ebdf7586c34","Type":"ContainerStarted","Data":"2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd"} Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.328077 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" event={"ID":"490f3d4c-1a65-46ad-b126-8ebdf7586c34","Type":"ContainerStarted","Data":"8f01669eebbfe1be600e28a7754fa1cb3b092c4dd80022308d7aeedce34a2e42"} Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.328316 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.333722 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.335723 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.347771 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-b77dbf775-b26rl" podStartSLOduration=35.347751788 podStartE2EDuration="35.347751788s" podCreationTimestamp="2026-01-21 13:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:12:44.345455486 +0000 UTC m=+225.308486039" watchObservedRunningTime="2026-01-21 13:12:44.347751788 +0000 UTC m=+225.310782331" Jan 21 13:12:44 crc kubenswrapper[4959]: I0121 13:12:44.368963 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" podStartSLOduration=10.368943514 podStartE2EDuration="10.368943514s" podCreationTimestamp="2026-01-21 13:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:12:44.368314367 +0000 UTC m=+225.331344910" watchObservedRunningTime="2026-01-21 13:12:44.368943514 +0000 UTC m=+225.331974057" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.265198 4959 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.266250 4959 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.266637 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5" gracePeriod=15 Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.266707 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.266730 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552" gracePeriod=15 Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.266730 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5" gracePeriod=15 Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.266808 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed" gracePeriod=15 Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.266818 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad" gracePeriod=15 Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.269995 4959 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:12:45 crc kubenswrapper[4959]: E0121 13:12:45.270747 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.270773 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:12:45 crc kubenswrapper[4959]: E0121 13:12:45.270793 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.270807 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 13:12:45 crc kubenswrapper[4959]: E0121 13:12:45.270825 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.270838 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 13:12:45 crc kubenswrapper[4959]: E0121 13:12:45.270852 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.270862 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 13:12:45 crc kubenswrapper[4959]: E0121 13:12:45.270888 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.270899 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:12:45 crc kubenswrapper[4959]: E0121 13:12:45.270916 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.270927 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 13:12:45 crc kubenswrapper[4959]: E0121 13:12:45.270950 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.270961 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.271360 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.271388 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.271402 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.271418 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.271435 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.271449 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.295158 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd12d219-aabd-430a-8567-e21c1674bbbf" path="/var/lib/kubelet/pods/fd12d219-aabd-430a-8567-e21c1674bbbf/volumes" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.307877 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.348165 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.348231 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.348270 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.348299 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.348366 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.348492 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.348538 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.348560 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.449683 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450034 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450084 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450139 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450189 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450232 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.449807 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450293 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450382 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450569 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450841 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450877 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.450929 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.451366 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.451768 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.451938 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: I0121 13:12:45.604615 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:12:45 crc kubenswrapper[4959]: W0121 13:12:45.625433 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-7c9f23094ad83464db792d695916c3398292614f30c9ae176d4cc0fd9efc66a5 WatchSource:0}: Error finding container 7c9f23094ad83464db792d695916c3398292614f30c9ae176d4cc0fd9efc66a5: Status 404 returned error can't find the container with id 7c9f23094ad83464db792d695916c3398292614f30c9ae176d4cc0fd9efc66a5 Jan 21 13:12:45 crc kubenswrapper[4959]: E0121 13:12:45.628029 4959 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc13004792469 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 13:12:45.627401321 +0000 UTC m=+226.590431864,LastTimestamp:2026-01-21 13:12:45.627401321 +0000 UTC m=+226.590431864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 13:12:46 crc 
kubenswrapper[4959]: I0121 13:12:46.340366 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa"} Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.340427 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7c9f23094ad83464db792d695916c3398292614f30c9ae176d4cc0fd9efc66a5"} Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.341234 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.343167 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.344690 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.345295 4959 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed" exitCode=0 Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.345318 4959 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5" exitCode=0 Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.345327 4959 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552" exitCode=0 Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.345335 4959 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad" exitCode=2 Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.345324 4959 scope.go:117] "RemoveContainer" containerID="337ecbcd276cb8d84b33e5ee9b5b7f708f92442e898a3acaddb130d14b825c7f" Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.347441 4959 generic.go:334] "Generic (PLEG): container finished" podID="3529cbe7-e333-4087-8d38-b1bea342f086" containerID="a6276b3211d6205f2604b90c0ac65b3d87287874e25b9f1e5e4bf151b94fd560" exitCode=0 Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.347505 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3529cbe7-e333-4087-8d38-b1bea342f086","Type":"ContainerDied","Data":"a6276b3211d6205f2604b90c0ac65b3d87287874e25b9f1e5e4bf151b94fd560"} Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.348773 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:46 crc kubenswrapper[4959]: I0121 13:12:46.349014 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.356642 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.666990 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.668320 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.669291 4959 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.670259 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.670509 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.774006 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.774947 4959 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.775482 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.776036 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799373 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799441 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799465 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799507 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799529 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799622 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799955 4959 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799979 4959 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.799987 4959 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.901292 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-kubelet-dir\") pod \"3529cbe7-e333-4087-8d38-b1bea342f086\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.901371 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3529cbe7-e333-4087-8d38-b1bea342f086" (UID: "3529cbe7-e333-4087-8d38-b1bea342f086"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.901411 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-var-lock\") pod \"3529cbe7-e333-4087-8d38-b1bea342f086\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.901456 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3529cbe7-e333-4087-8d38-b1bea342f086-kube-api-access\") pod \"3529cbe7-e333-4087-8d38-b1bea342f086\" (UID: \"3529cbe7-e333-4087-8d38-b1bea342f086\") " Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.901578 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-var-lock" (OuterVolumeSpecName: "var-lock") pod "3529cbe7-e333-4087-8d38-b1bea342f086" (UID: "3529cbe7-e333-4087-8d38-b1bea342f086"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.901923 4959 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.901947 4959 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3529cbe7-e333-4087-8d38-b1bea342f086-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:47 crc kubenswrapper[4959]: I0121 13:12:47.907132 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3529cbe7-e333-4087-8d38-b1bea342f086-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3529cbe7-e333-4087-8d38-b1bea342f086" (UID: "3529cbe7-e333-4087-8d38-b1bea342f086"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.003682 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3529cbe7-e333-4087-8d38-b1bea342f086-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.369405 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.370266 4959 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5" exitCode=0 Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.370317 4959 scope.go:117] "RemoveContainer" containerID="b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.370457 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.373309 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3529cbe7-e333-4087-8d38-b1bea342f086","Type":"ContainerDied","Data":"ca70974464cb92a6cecadfcc3568501fa397e92dc1fe16935b63204f5525377c"} Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.373338 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca70974464cb92a6cecadfcc3568501fa397e92dc1fe16935b63204f5525377c" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.373373 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.389904 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.390613 4959 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.390827 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.391105 4959 scope.go:117] "RemoveContainer" containerID="19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.404615 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.405031 4959 scope.go:117] "RemoveContainer" containerID="5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.405040 4959 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.405510 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.426081 4959 scope.go:117] "RemoveContainer" containerID="d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.442329 4959 scope.go:117] "RemoveContainer" containerID="1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.461466 4959 scope.go:117] "RemoveContainer" containerID="e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.482124 4959 scope.go:117] "RemoveContainer" containerID="b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed" Jan 21 13:12:48 crc 
kubenswrapper[4959]: E0121 13:12:48.482549 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\": container with ID starting with b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed not found: ID does not exist" containerID="b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.482589 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed"} err="failed to get container status \"b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\": rpc error: code = NotFound desc = could not find container \"b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed\": container with ID starting with b88791f3514d6c7f3d029471bc6434ba51beb8b557c12a9f2afb140ecb384eed not found: ID does not exist" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.482619 4959 scope.go:117] "RemoveContainer" containerID="19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5" Jan 21 13:12:48 crc kubenswrapper[4959]: E0121 13:12:48.483125 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\": container with ID starting with 19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5 not found: ID does not exist" containerID="19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.483171 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5"} err="failed to get container status \"19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\": rpc error: code = NotFound desc = could not find container \"19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5\": container with ID starting with 19c2bea25ed921a55a9d5fddf7f259ac251e78555606fb4574a50bd6eaaeebc5 not found: ID does not exist" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.483203 4959 scope.go:117] "RemoveContainer" containerID="5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552" Jan 21 13:12:48 crc kubenswrapper[4959]: E0121 13:12:48.484394 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\": container with ID starting with 5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552 not found: ID does not exist" containerID="5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.484412 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552"} err="failed to get container status \"5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\": rpc error: code = NotFound desc = could not find container \"5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552\": container with ID starting with 5bf6810c2a6873e38a5894532a5cb486615aa1e499082a4b903ccace42908552 not found: ID does not exist" Jan 21 13:12:48 crc kubenswrapper[4959]: 
I0121 13:12:48.484431 4959 scope.go:117] "RemoveContainer" containerID="d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad" Jan 21 13:12:48 crc kubenswrapper[4959]: E0121 13:12:48.484825 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\": container with ID starting with d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad not found: ID does not exist" containerID="d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.484848 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad"} err="failed to get container status \"d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\": rpc error: code = NotFound desc = could not find container \"d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad\": container with ID starting with d9be0e305393a8f9fefb3c51d9763dd69ed7040412e9558f1fbd419147b193ad not found: ID does not exist" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.484865 4959 scope.go:117] "RemoveContainer" containerID="1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5" Jan 21 13:12:48 crc kubenswrapper[4959]: E0121 13:12:48.485116 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\": container with ID starting with 1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5 not found: ID does not exist" containerID="1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.485139 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5"} err="failed to get container status \"1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\": rpc error: code = NotFound desc = could not find container \"1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5\": container with ID starting with 1efc1bdc120cc169c3521c421991aabcc26207d39e5b87bf2ce9ebd588d2bbe5 not found: ID does not exist" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.485155 4959 scope.go:117] "RemoveContainer" containerID="e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8" Jan 21 13:12:48 crc kubenswrapper[4959]: E0121 13:12:48.485546 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\": container with ID starting with e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8 not found: ID does not exist" containerID="e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8" Jan 21 13:12:48 crc kubenswrapper[4959]: I0121 13:12:48.485582 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8"} err="failed to get container status \"e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\": rpc error: code = NotFound desc = could not find container \"e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8\": container 
with ID starting with e4b211d32d21267aaa7067e9a4e88958fcef41154fd6a3ba89d662d335064bf8 not found: ID does not exist" Jan 21 13:12:49 crc kubenswrapper[4959]: I0121 13:12:49.291146 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:49 crc kubenswrapper[4959]: I0121 13:12:49.291772 4959 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:49 crc kubenswrapper[4959]: I0121 13:12:49.292448 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:49 crc kubenswrapper[4959]: I0121 13:12:49.297506 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 13:12:52 crc kubenswrapper[4959]: E0121 13:12:52.563515 4959 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.220:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc13004792469 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 13:12:45.627401321 +0000 UTC m=+226.590431864,LastTimestamp:2026-01-21 13:12:45.627401321 +0000 UTC m=+226.590431864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 13:12:54 crc kubenswrapper[4959]: E0121 13:12:54.163777 4959 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:54 crc kubenswrapper[4959]: E0121 13:12:54.164755 4959 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:54 crc kubenswrapper[4959]: E0121 13:12:54.165327 4959 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:54 crc kubenswrapper[4959]: E0121 13:12:54.165775 4959 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:54 crc kubenswrapper[4959]: E0121 13:12:54.166118 4959 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:54 crc kubenswrapper[4959]: I0121 13:12:54.166158 4959 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 13:12:54 crc kubenswrapper[4959]: E0121 13:12:54.166420 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="200ms" Jan 21 13:12:54 crc kubenswrapper[4959]: E0121 13:12:54.367468 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="400ms" Jan 21 13:12:54 crc kubenswrapper[4959]: E0121 13:12:54.768653 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="800ms" Jan 21 13:12:55 crc kubenswrapper[4959]: E0121 13:12:55.569413 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="1.6s" Jan 21 13:12:57 crc kubenswrapper[4959]: E0121 13:12:57.170969 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="3.2s" Jan 21 13:12:59 crc kubenswrapper[4959]: I0121 13:12:59.288878 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:12:59 crc kubenswrapper[4959]: I0121 13:12:59.290448 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.285264 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.286790 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.287410 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.307433 4959 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.307510 4959 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:00 crc kubenswrapper[4959]: E0121 13:13:00.308262 4959 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.308937 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:00 crc kubenswrapper[4959]: W0121 13:13:00.339231 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-c0d0935a3dab22c54fac1ed3e51879e1226982e064ea97e967e9607dc229b969 WatchSource:0}: Error finding container c0d0935a3dab22c54fac1ed3e51879e1226982e064ea97e967e9607dc229b969: Status 404 returned error can't find the container with id c0d0935a3dab22c54fac1ed3e51879e1226982e064ea97e967e9607dc229b969 Jan 21 13:13:00 crc kubenswrapper[4959]: E0121 13:13:00.372366 4959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.220:6443: connect: connection refused" interval="6.4s" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.454115 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.454179 4959 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7" exitCode=1 Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.454249 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7"} Jan 21 13:13:00 crc 
kubenswrapper[4959]: I0121 13:13:00.454721 4959 scope.go:117] "RemoveContainer" containerID="213cab4d17138473dfa0909b41246e946df3d87734fea079fe17792c0c0dd5a7" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.455563 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.455978 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.456181 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c0d0935a3dab22c54fac1ed3e51879e1226982e064ea97e967e9607dc229b969"} Jan 21 13:13:00 crc kubenswrapper[4959]: I0121 13:13:00.456577 4959 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.468249 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.468737 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0bc21be8aaeb435d087b6167cc3e239f6db9c4ca76b929d0f1edd486bbc63b0c"} Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.470072 4959 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.470704 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.471327 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:01 crc kubenswrapper[4959]: 
I0121 13:13:01.472348 4959 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0022854599b7159a59cc79a71d287d75f78873a47215a758a8d4951648c942b3" exitCode=0 Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.472438 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0022854599b7159a59cc79a71d287d75f78873a47215a758a8d4951648c942b3"} Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.472881 4959 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.472923 4959 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.473336 4959 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:01 crc kubenswrapper[4959]: E0121 13:13:01.473484 4959 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.473938 4959 status_manager.go:851] "Failed to get status for pod" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:01 crc kubenswrapper[4959]: I0121 13:13:01.474481 4959 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.220:6443: connect: connection refused" Jan 21 13:13:02 crc kubenswrapper[4959]: I0121 13:13:02.483141 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"586564271b28ae667300a18e28be40818471da7c83ecb61837dd57f5d81ee38b"} Jan 21 13:13:02 crc kubenswrapper[4959]: I0121 13:13:02.483429 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cbf1aa0d66ac11c35eca7e4f0455e0040f291af8e462f8b9ff76c1920639aa0d"} Jan 21 13:13:02 crc kubenswrapper[4959]: I0121 13:13:02.483439 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"abe12a7f74c7a77fff38ea846c05df39919439ec3a6f71270a034eee3617aa22"} Jan 21 13:13:02 crc kubenswrapper[4959]: I0121 13:13:02.483450 4959 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8277a8b60ff7334bf5ac88a2070693809918140da93f71132586cff60a5d2841"} Jan 21 13:13:03 crc kubenswrapper[4959]: I0121 13:13:03.490954 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5c45e8cfd99a5dbebc61881c889fdd15ca9618bcb33ead3143b17909ad5b9ba8"} Jan 21 13:13:03 crc kubenswrapper[4959]: I0121 13:13:03.491137 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:03 crc kubenswrapper[4959]: I0121 13:13:03.491243 4959 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:03 crc kubenswrapper[4959]: I0121 13:13:03.491261 4959 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:04 crc kubenswrapper[4959]: I0121 13:13:04.003416 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:13:05 crc kubenswrapper[4959]: I0121 13:13:05.309489 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:05 crc kubenswrapper[4959]: I0121 13:13:05.309561 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:05 crc kubenswrapper[4959]: I0121 13:13:05.315068 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:05 crc kubenswrapper[4959]: I0121 13:13:05.680391 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:13:05 crc kubenswrapper[4959]: I0121 13:13:05.687090 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:13:08 crc kubenswrapper[4959]: I0121 13:13:08.500551 4959 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:09 crc kubenswrapper[4959]: I0121 13:13:09.303392 4959 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c73462c6-a5d3-4b47-992f-c849cc1070cc" Jan 21 13:13:09 crc kubenswrapper[4959]: I0121 13:13:09.525398 4959 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:09 crc kubenswrapper[4959]: I0121 13:13:09.525481 4959 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:09 crc kubenswrapper[4959]: I0121 13:13:09.528329 4959 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c73462c6-a5d3-4b47-992f-c849cc1070cc" Jan 21 13:13:09 crc kubenswrapper[4959]: I0121 13:13:09.530545 4959 
status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://8277a8b60ff7334bf5ac88a2070693809918140da93f71132586cff60a5d2841" Jan 21 13:13:09 crc kubenswrapper[4959]: I0121 13:13:09.530571 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:10 crc kubenswrapper[4959]: I0121 13:13:10.532270 4959 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:10 crc kubenswrapper[4959]: I0121 13:13:10.533883 4959 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08af37e1-90cb-4397-ab98-608ede176954" Jan 21 13:13:10 crc kubenswrapper[4959]: I0121 13:13:10.535655 4959 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c73462c6-a5d3-4b47-992f-c849cc1070cc" Jan 21 13:13:14 crc kubenswrapper[4959]: I0121 13:13:14.007151 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:13:17 crc kubenswrapper[4959]: I0121 13:13:17.670589 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 13:13:18 crc kubenswrapper[4959]: I0121 13:13:18.162670 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 13:13:18 crc kubenswrapper[4959]: I0121 13:13:18.507091 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 13:13:18 crc kubenswrapper[4959]: I0121 13:13:18.762578 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 13:13:19 crc kubenswrapper[4959]: I0121 13:13:19.086923 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 13:13:19 crc kubenswrapper[4959]: I0121 13:13:19.104956 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 13:13:19 crc kubenswrapper[4959]: I0121 13:13:19.483722 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 13:13:19 crc kubenswrapper[4959]: I0121 13:13:19.689255 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.020955 4959 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.039637 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.241552 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.615158 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 13:13:20 crc 
kubenswrapper[4959]: I0121 13:13:20.715357 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.753831 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.821844 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.886307 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.960693 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 13:13:20 crc kubenswrapper[4959]: I0121 13:13:20.975889 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.009069 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.016281 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.044058 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.109162 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.175310 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.244369 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.378077 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.419624 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.486317 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.564149 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.600490 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.653777 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.826044 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.832816 4959 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.848992 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.855768 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.870189 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.880609 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 13:13:21 crc kubenswrapper[4959]: I0121 13:13:21.903584 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.037025 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.037398 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.080190 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.266602 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.301692 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.375685 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.380541 4959 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.381721 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.437367 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.617802 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.675671 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.802413 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.816755 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 
13:13:22.833306 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.904014 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 13:13:22 crc kubenswrapper[4959]: I0121 13:13:22.915573 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.025055 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.051475 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.111801 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.194056 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.256635 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.326807 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.339436 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.345651 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.355661 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.441348 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.541735 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.638037 4959 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.639721 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.654281 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.836472 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.860449 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 13:13:23 crc 
kubenswrapper[4959]: I0121 13:13:23.866122 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.914566 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 13:13:23 crc kubenswrapper[4959]: I0121 13:13:23.988407 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.024576 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.033481 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.068258 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.181578 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.386605 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.391799 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.403910 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.435234 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.516603 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.536648 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.704392 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.759058 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 13:13:24 crc kubenswrapper[4959]: I0121 13:13:24.959723 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.034496 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.078902 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.106051 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 
13:13:25.145175 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.176509 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.292381 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.302712 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.366668 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.393576 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.517807 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.523692 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.555590 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.839507 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.854602 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.903713 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 13:13:25 crc kubenswrapper[4959]: I0121 13:13:25.970027 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.087473 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.092847 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.094522 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.097291 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.157631 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 13:13:26 crc 
kubenswrapper[4959]: I0121 13:13:26.167891 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.181260 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.268437 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.397434 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.468973 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.509666 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.539314 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.557590 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.593908 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.617486 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.633581 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.663141 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.694309 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.810085 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.822189 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.938888 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.969510 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 13:13:26 crc kubenswrapper[4959]: I0121 13:13:26.983144 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.047761 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 13:13:27 crc 
kubenswrapper[4959]: I0121 13:13:27.053786 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.055125 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.127190 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.230333 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.242322 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.269266 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.289576 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.294280 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.311987 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.321815 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.330083 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.338607 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.376366 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.402152 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.430422 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.513172 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.527710 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.534580 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.600682 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 13:13:27 crc 
kubenswrapper[4959]: I0121 13:13:27.755318 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.766465 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.818645 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.829889 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.877767 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 13:13:27 crc kubenswrapper[4959]: I0121 13:13:27.957154 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.046157 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.058035 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.092619 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.121989 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.165596 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.197158 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.230616 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.330602 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.348902 4959 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.537807 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.557282 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.574393 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.577529 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 13:13:28 crc 
kubenswrapper[4959]: I0121 13:13:28.582154 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.586134 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.607969 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.623867 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.629309 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.701890 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.722408 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.747819 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.767501 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.791957 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.838131 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.847471 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 13:13:28 crc kubenswrapper[4959]: I0121 13:13:28.861217 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.019167 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.041673 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.049306 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.197225 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.203700 4959 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.206782 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.206765888 podStartE2EDuration="44.206765888s" 
podCreationTimestamp="2026-01-21 13:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:13:08.514341436 +0000 UTC m=+249.477371989" watchObservedRunningTime="2026-01-21 13:13:29.206765888 +0000 UTC m=+270.169796431" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.208401 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.208442 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.213808 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.231307 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.231281668 podStartE2EDuration="21.231281668s" podCreationTimestamp="2026-01-21 13:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:13:29.230294843 +0000 UTC m=+270.193325406" watchObservedRunningTime="2026-01-21 13:13:29.231281668 +0000 UTC m=+270.194312221" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.408411 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.418950 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.426044 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.542952 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.684775 4959 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.699379 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.807378 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 13:13:29 crc kubenswrapper[4959]: I0121 13:13:29.830483 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.043211 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.099825 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.196948 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 13:13:30 
crc kubenswrapper[4959]: I0121 13:13:30.202322 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.357820 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.439840 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.503895 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.504812 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.554022 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.596152 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.711812 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.755859 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.821977 4959 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.822207 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa" gracePeriod=5 Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.876809 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 13:13:30 crc kubenswrapper[4959]: I0121 13:13:30.987599 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.026332 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.177491 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.187266 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.269560 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.536276 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.623490 4959 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.724587 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.726226 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.744894 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.797552 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.886268 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 13:13:31 crc kubenswrapper[4959]: I0121 13:13:31.993000 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.113397 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.154929 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.242520 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.261395 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.286937 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.314177 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.351225 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.411710 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.480696 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.680654 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.941161 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.992775 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 13:13:32 crc kubenswrapper[4959]: I0121 13:13:32.997120 4959 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.012683 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.034522 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.323680 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.370960 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.455184 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.510862 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.597167 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.656644 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 13:13:33 crc kubenswrapper[4959]: I0121 13:13:33.660555 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 13:13:34 crc kubenswrapper[4959]: I0121 13:13:34.334065 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 13:13:34 crc kubenswrapper[4959]: I0121 13:13:34.664061 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7675bf54db-ftnb8"] Jan 21 13:13:34 crc kubenswrapper[4959]: I0121 13:13:34.664318 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" podUID="e9823016-93fd-4112-b72f-f0b258e41caa" containerName="controller-manager" containerID="cri-o://6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012" gracePeriod=30 Jan 21 13:13:34 crc kubenswrapper[4959]: I0121 13:13:34.667613 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p"] Jan 21 13:13:34 crc kubenswrapper[4959]: I0121 13:13:34.667841 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" podUID="490f3d4c-1a65-46ad-b126-8ebdf7586c34" containerName="route-controller-manager" containerID="cri-o://2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd" gracePeriod=30 Jan 21 13:13:34 crc kubenswrapper[4959]: I0121 13:13:34.733857 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 13:13:34 crc kubenswrapper[4959]: I0121 13:13:34.822108 4959 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.208259 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.229221 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.291124 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.399038 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-config\") pod \"e9823016-93fd-4112-b72f-f0b258e41caa\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.399083 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-client-ca\") pod \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.399136 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9823016-93fd-4112-b72f-f0b258e41caa-serving-cert\") pod \"e9823016-93fd-4112-b72f-f0b258e41caa\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.399186 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85g95\" (UniqueName: \"kubernetes.io/projected/490f3d4c-1a65-46ad-b126-8ebdf7586c34-kube-api-access-85g95\") pod \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.399202 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-client-ca\") pod \"e9823016-93fd-4112-b72f-f0b258e41caa\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.399217 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/490f3d4c-1a65-46ad-b126-8ebdf7586c34-serving-cert\") pod \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.399237 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-config\") pod \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\" (UID: \"490f3d4c-1a65-46ad-b126-8ebdf7586c34\") " Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.399254 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlmlj\" (UniqueName: \"kubernetes.io/projected/e9823016-93fd-4112-b72f-f0b258e41caa-kube-api-access-dlmlj\") pod \"e9823016-93fd-4112-b72f-f0b258e41caa\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " Jan 21 13:13:35 crc 
kubenswrapper[4959]: I0121 13:13:35.399268 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-proxy-ca-bundles\") pod \"e9823016-93fd-4112-b72f-f0b258e41caa\" (UID: \"e9823016-93fd-4112-b72f-f0b258e41caa\") " Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.400303 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e9823016-93fd-4112-b72f-f0b258e41caa" (UID: "e9823016-93fd-4112-b72f-f0b258e41caa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.400422 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-config" (OuterVolumeSpecName: "config") pod "490f3d4c-1a65-46ad-b126-8ebdf7586c34" (UID: "490f3d4c-1a65-46ad-b126-8ebdf7586c34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.400654 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-client-ca" (OuterVolumeSpecName: "client-ca") pod "490f3d4c-1a65-46ad-b126-8ebdf7586c34" (UID: "490f3d4c-1a65-46ad-b126-8ebdf7586c34"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.400651 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-client-ca" (OuterVolumeSpecName: "client-ca") pod "e9823016-93fd-4112-b72f-f0b258e41caa" (UID: "e9823016-93fd-4112-b72f-f0b258e41caa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.401208 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-config" (OuterVolumeSpecName: "config") pod "e9823016-93fd-4112-b72f-f0b258e41caa" (UID: "e9823016-93fd-4112-b72f-f0b258e41caa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.406304 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490f3d4c-1a65-46ad-b126-8ebdf7586c34-kube-api-access-85g95" (OuterVolumeSpecName: "kube-api-access-85g95") pod "490f3d4c-1a65-46ad-b126-8ebdf7586c34" (UID: "490f3d4c-1a65-46ad-b126-8ebdf7586c34"). InnerVolumeSpecName "kube-api-access-85g95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.406386 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490f3d4c-1a65-46ad-b126-8ebdf7586c34-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "490f3d4c-1a65-46ad-b126-8ebdf7586c34" (UID: "490f3d4c-1a65-46ad-b126-8ebdf7586c34"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.407547 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9823016-93fd-4112-b72f-f0b258e41caa-kube-api-access-dlmlj" (OuterVolumeSpecName: "kube-api-access-dlmlj") pod "e9823016-93fd-4112-b72f-f0b258e41caa" (UID: "e9823016-93fd-4112-b72f-f0b258e41caa"). InnerVolumeSpecName "kube-api-access-dlmlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.408418 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9823016-93fd-4112-b72f-f0b258e41caa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e9823016-93fd-4112-b72f-f0b258e41caa" (UID: "e9823016-93fd-4112-b72f-f0b258e41caa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500376 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500410 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlmlj\" (UniqueName: \"kubernetes.io/projected/e9823016-93fd-4112-b72f-f0b258e41caa-kube-api-access-dlmlj\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500423 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500433 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500441 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/490f3d4c-1a65-46ad-b126-8ebdf7586c34-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500450 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9823016-93fd-4112-b72f-f0b258e41caa-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500459 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85g95\" (UniqueName: \"kubernetes.io/projected/490f3d4c-1a65-46ad-b126-8ebdf7586c34-kube-api-access-85g95\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500468 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9823016-93fd-4112-b72f-f0b258e41caa-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.500477 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/490f3d4c-1a65-46ad-b126-8ebdf7586c34-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.664159 4959 generic.go:334] "Generic (PLEG): container finished" podID="e9823016-93fd-4112-b72f-f0b258e41caa" 
containerID="6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012" exitCode=0 Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.664236 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" event={"ID":"e9823016-93fd-4112-b72f-f0b258e41caa","Type":"ContainerDied","Data":"6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012"} Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.664236 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.664268 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7675bf54db-ftnb8" event={"ID":"e9823016-93fd-4112-b72f-f0b258e41caa","Type":"ContainerDied","Data":"b93509ad572c6caaf82b014c4d0b317fe7a6fe48132700d72770bc687bb2c3c5"} Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.664288 4959 scope.go:117] "RemoveContainer" containerID="6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.665983 4959 generic.go:334] "Generic (PLEG): container finished" podID="490f3d4c-1a65-46ad-b126-8ebdf7586c34" containerID="2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd" exitCode=0 Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.666004 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" event={"ID":"490f3d4c-1a65-46ad-b126-8ebdf7586c34","Type":"ContainerDied","Data":"2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd"} Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.666021 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" event={"ID":"490f3d4c-1a65-46ad-b126-8ebdf7586c34","Type":"ContainerDied","Data":"8f01669eebbfe1be600e28a7754fa1cb3b092c4dd80022308d7aeedce34a2e42"} Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.666069 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.691986 4959 scope.go:117] "RemoveContainer" containerID="6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012" Jan 21 13:13:35 crc kubenswrapper[4959]: E0121 13:13:35.692950 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012\": container with ID starting with 6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012 not found: ID does not exist" containerID="6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.693140 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012"} err="failed to get container status \"6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012\": rpc error: code = NotFound desc = could not find container \"6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012\": container with ID starting with 6613e90e450f3f6ab36f64168901d5258048d6c3c0602caa5caa27f1c5f14012 not found: ID does not exist" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.693266 4959 scope.go:117] "RemoveContainer" containerID="2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.709201 4959 scope.go:117] "RemoveContainer" containerID="2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd" Jan 21 13:13:35 crc kubenswrapper[4959]: E0121 13:13:35.709686 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd\": container with ID starting with 2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd not found: ID does not exist" containerID="2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.709736 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd"} err="failed to get container status \"2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd\": rpc error: code = NotFound desc = could not find container \"2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd\": container with ID starting with 2671c26767fc5ec0b00dcd21a5e6152d948f7706efd25a149f9f0313dfee56cd not found: ID does not exist" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.711394 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7675bf54db-ftnb8"] Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.716129 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7675bf54db-ftnb8"] Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.721478 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p"] Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.725873 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77cbf8dbd6-2hv4p"] Jan 21 
13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.780691 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"] Jan 21 13:13:35 crc kubenswrapper[4959]: E0121 13:13:35.780983 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9823016-93fd-4112-b72f-f0b258e41caa" containerName="controller-manager" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.781013 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9823016-93fd-4112-b72f-f0b258e41caa" containerName="controller-manager" Jan 21 13:13:35 crc kubenswrapper[4959]: E0121 13:13:35.787970 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" containerName="installer" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788006 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" containerName="installer" Jan 21 13:13:35 crc kubenswrapper[4959]: E0121 13:13:35.788034 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788046 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 13:13:35 crc kubenswrapper[4959]: E0121 13:13:35.788065 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490f3d4c-1a65-46ad-b126-8ebdf7586c34" containerName="route-controller-manager" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788076 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="490f3d4c-1a65-46ad-b126-8ebdf7586c34" containerName="route-controller-manager" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788302 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788320 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="490f3d4c-1a65-46ad-b126-8ebdf7586c34" containerName="route-controller-manager" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788335 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3529cbe7-e333-4087-8d38-b1bea342f086" containerName="installer" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788346 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9823016-93fd-4112-b72f-f0b258e41caa" containerName="controller-manager" Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788860 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"] Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788963 4959 util.go:30] "No sandbox for pod can be found. 
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.788963 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.790514 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.793649 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.793814 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.793926 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.804319 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-proxy-ca-bundles\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.804385 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5k5n\" (UniqueName: \"kubernetes.io/projected/7473609d-30e6-4896-bdd5-14b38fd71598-kube-api-access-f5k5n\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.804424 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7473609d-30e6-4896-bdd5-14b38fd71598-serving-cert\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.804455 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-client-ca\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.804527 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-config\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.805989 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.806230 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.810249 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.905742 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7473609d-30e6-4896-bdd5-14b38fd71598-serving-cert\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.905805 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-client-ca\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.905842 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-config\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.905879 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-proxy-ca-bundles\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.905927 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5k5n\" (UniqueName: \"kubernetes.io/projected/7473609d-30e6-4896-bdd5-14b38fd71598-kube-api-access-f5k5n\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.907279 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-client-ca\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.907736 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-config\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.908938 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-proxy-ca-bundles\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
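Each VerifyControllerAttachedVolume/MountVolume.SetUp pair above materializes a configmap, secret, or projected volume under the pod's directory on disk. A small sketch that walks that directory for the controller-manager pod UID from the log; the /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<name> layout is the usual kubelet convention and is assumed here:

// Sketch: list where the volumes mounted above land on disk, e.g.
// .../volumes/kubernetes.io~configmap/client-ca. Pod UID from the log.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := "/var/lib/kubelet/pods/7473609d-30e6-4896-bdd5-14b38fd71598/volumes"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			fmt.Println(path) // one directory per plugin and per volume name
		}
		return nil
	})
	if err != nil {
		fmt.Println("walk failed:", err)
	}
}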
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.910439 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7473609d-30e6-4896-bdd5-14b38fd71598-serving-cert\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:35 crc kubenswrapper[4959]: I0121 13:13:35.926961 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5k5n\" (UniqueName: \"kubernetes.io/projected/7473609d-30e6-4896-bdd5-14b38fd71598-kube-api-access-f5k5n\") pod \"controller-manager-c4ff7d7c6-wt6k8\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.130244 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.307361 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"]
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.373668 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.373751 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.512903 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.512953 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.513027 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.513063 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.513156 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.513896 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.513929 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.513947 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.513963 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.518873 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.614658 4959 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.614692 4959 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.614701 4959 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.614709 4959 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.614718 4959 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.671589 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.671639 4959 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa" exitCode=137
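The startup-monitor container finishing with exitCode=137 means it was killed by SIGKILL (137 = 128 + 9) rather than exiting on its own, which fits a static pod being torn down. A few lines of Go showing the standard decoding rule:

// Decode a container exit code the way shells report signal deaths:
// codes above 128 are 128 + signal number, so 137 = 128 + 9 (SIGKILL).
package main

import (
	"fmt"
	"syscall"
)

func main() {
	exitCode := 137 // from the PLEG "container finished" entry above
	if exitCode > 128 {
		sig := syscall.Signal(exitCode - 128)
		fmt.Printf("killed by signal %d (%s)\n", int(sig), sig)
		return
	}
	fmt.Println("exited normally with code", exitCode)
}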
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.671705 4959 scope.go:117] "RemoveContainer" containerID="0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.671798 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.682467 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8" event={"ID":"7473609d-30e6-4896-bdd5-14b38fd71598","Type":"ContainerStarted","Data":"af3fc721621c9ed0dadf84e86dd14ccc21dc3356daa0d346a532d5be5c510fa6"}
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.682512 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8" event={"ID":"7473609d-30e6-4896-bdd5-14b38fd71598","Type":"ContainerStarted","Data":"05238892f73d85b7e32ab481924ceab72a67907f3389dfaab9e3b39849f5e622"}
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.684665 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.691881 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.704051 4959 scope.go:117] "RemoveContainer" containerID="0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa"
Jan 21 13:13:36 crc kubenswrapper[4959]: E0121 13:13:36.705174 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa\": container with ID starting with 0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa not found: ID does not exist" containerID="0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.705236 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa"} err="failed to get container status \"0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa\": rpc error: code = NotFound desc = could not find container \"0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa\": container with ID starting with 0de48a76e185f2980fd533883ba58975b760fbcc486359a4974bb5e2b2923caa not found: ID does not exist"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.733634 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8" podStartSLOduration=2.733613314 podStartE2EDuration="2.733613314s" podCreationTimestamp="2026-01-21 13:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:13:36.714801638 +0000 UTC m=+277.677832181" watchObservedRunningTime="2026-01-21 13:13:36.733613314 +0000 UTC m=+277.696643857"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.777468 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"]
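podStartE2EDuration in the startup-latency entry above is plain timestamp arithmetic: the 2.733613314s figure lines up with watchObservedRunningTime minus podCreationTimestamp. A sketch reproducing it from the logged values:

// Reproduce the pod startup duration from the timestamps in the entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-21 13:13:34 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-21 13:13:36.733613314 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // prints 2.733613314s
}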
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.778303 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.780370 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.780546 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.780686 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.780985 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.781257 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.781260 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.790045 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"]
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.817236 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd8a97-0ce5-47f1-baad-32a61d41ee41-serving-cert\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.817346 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcmph\" (UniqueName: \"kubernetes.io/projected/59fd8a97-0ce5-47f1-baad-32a61d41ee41-kube-api-access-xcmph\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.817374 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-client-ca\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.817392 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-config\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
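The reflector.go:368 "Caches populated" lines record informer caches finishing their initial list for the ConfigMaps and Secrets this pod mounts. A minimal client-go sketch of the same reflector/informer pattern; the in-cluster config and hard-coded namespace are illustrative assumptions, not taken from the kubelet:

// Sketch: a namespaced ConfigMap informer whose cache is synced before use,
// the pattern behind the "Caches populated for *v1.ConfigMap" lines above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 10*time.Minute,
		informers.WithNamespace("openshift-route-controller-manager"))
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// The reflector logs its cache as populated once the initial LIST/WATCH
	// has filled the store; WaitForCacheSync is that barrier.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("configmap cache populated")
}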
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.917970 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-config\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.918056 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd8a97-0ce5-47f1-baad-32a61d41ee41-serving-cert\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.918149 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcmph\" (UniqueName: \"kubernetes.io/projected/59fd8a97-0ce5-47f1-baad-32a61d41ee41-kube-api-access-xcmph\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.918176 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-client-ca\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.919112 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-client-ca\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.920381 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-config\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.922489 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd8a97-0ce5-47f1-baad-32a61d41ee41-serving-cert\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:36 crc kubenswrapper[4959]: I0121 13:13:36.940338 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcmph\" (UniqueName: \"kubernetes.io/projected/59fd8a97-0ce5-47f1-baad-32a61d41ee41-kube-api-access-xcmph\") pod \"route-controller-manager-798f7896f6-9szzl\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.114600 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.294966 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="490f3d4c-1a65-46ad-b126-8ebdf7586c34" path="/var/lib/kubelet/pods/490f3d4c-1a65-46ad-b126-8ebdf7586c34/volumes"
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.297275 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9823016-93fd-4112-b72f-f0b258e41caa" path="/var/lib/kubelet/pods/e9823016-93fd-4112-b72f-f0b258e41caa/volumes"
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.297724 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.298045 4959 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.306169 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.306244 4959 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="da70bcd3-8666-4382-be8c-1cccd17c465e"
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.309935 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.309980 4959 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="da70bcd3-8666-4382-be8c-1cccd17c465e"
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.545526 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"]
Jan 21 13:13:37 crc kubenswrapper[4959]: I0121 13:13:37.695518 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl" event={"ID":"59fd8a97-0ce5-47f1-baad-32a61d41ee41","Type":"ContainerStarted","Data":"5b23a4b40877ad9eaa04606a8cb0bc52c9959568ee51ae1c4049a23dc9e68ab7"}
Jan 21 13:13:38 crc kubenswrapper[4959]: I0121 13:13:38.702642 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl" event={"ID":"59fd8a97-0ce5-47f1-baad-32a61d41ee41","Type":"ContainerStarted","Data":"1f4667b44f70ead1ccbccb7cf656f507372de5aaabdd9aa9cb9fd97af4d82523"}
Jan 21 13:13:38 crc kubenswrapper[4959]: I0121 13:13:38.722897 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl" podStartSLOduration=4.722872636 podStartE2EDuration="4.722872636s" podCreationTimestamp="2026-01-21 13:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:13:38.718715301 +0000 UTC m=+279.681745844" watchObservedRunningTime="2026-01-21 13:13:38.722872636 +0000 UTC m=+279.685903209"
Jan 21 13:13:39 crc kubenswrapper[4959]: I0121 13:13:39.709228 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:39 crc kubenswrapper[4959]: I0121 13:13:39.715294 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:50 crc kubenswrapper[4959]: I0121 13:13:50.738285 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 13:13:50 crc kubenswrapper[4959]: I0121 13:13:50.756763 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 21 13:13:51 crc kubenswrapper[4959]: I0121 13:13:51.769799 4959 generic.go:334] "Generic (PLEG): container finished" podID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerID="1bcb24eb4294b135dc425adf3e17e57e7033a4322754e44501b625ce697d3bf5" exitCode=0
Jan 21 13:13:51 crc kubenswrapper[4959]: I0121 13:13:51.769846 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" event={"ID":"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb","Type":"ContainerDied","Data":"1bcb24eb4294b135dc425adf3e17e57e7033a4322754e44501b625ce697d3bf5"}
Jan 21 13:13:51 crc kubenswrapper[4959]: I0121 13:13:51.770403 4959 scope.go:117] "RemoveContainer" containerID="1bcb24eb4294b135dc425adf3e17e57e7033a4322754e44501b625ce697d3bf5"
Jan 21 13:13:52 crc kubenswrapper[4959]: I0121 13:13:52.778210 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" event={"ID":"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb","Type":"ContainerStarted","Data":"47f2e3469b1551818252a1da3375c43b6962067f07526c873a436b54add87765"}
Jan 21 13:13:52 crc kubenswrapper[4959]: I0121 13:13:52.778959 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d"
Jan 21 13:13:52 crc kubenswrapper[4959]: I0121 13:13:52.780617 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d"
Jan 21 13:13:53 crc kubenswrapper[4959]: I0121 13:13:53.937490 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 13:13:54 crc kubenswrapper[4959]: I0121 13:13:54.601429 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"]
Jan 21 13:13:54 crc kubenswrapper[4959]: I0121 13:13:54.601678 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8" podUID="7473609d-30e6-4896-bdd5-14b38fd71598" containerName="controller-manager" containerID="cri-o://af3fc721621c9ed0dadf84e86dd14ccc21dc3356daa0d346a532d5be5c510fa6" gracePeriod=30
Jan 21 13:13:54 crc kubenswrapper[4959]: I0121 13:13:54.618212 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"]
Jan 21 13:13:54 crc kubenswrapper[4959]: I0121 13:13:54.618469 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl" podUID="59fd8a97-0ce5-47f1-baad-32a61d41ee41" containerName="route-controller-manager" containerID="cri-o://1f4667b44f70ead1ccbccb7cf656f507372de5aaabdd9aa9cb9fd97af4d82523" gracePeriod=30
Jan 21 13:13:54 crc kubenswrapper[4959]: I0121 13:13:54.795248 4959 generic.go:334] "Generic (PLEG): container finished" podID="59fd8a97-0ce5-47f1-baad-32a61d41ee41" containerID="1f4667b44f70ead1ccbccb7cf656f507372de5aaabdd9aa9cb9fd97af4d82523" exitCode=0
Jan 21 13:13:54 crc kubenswrapper[4959]: I0121 13:13:54.795333 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl" event={"ID":"59fd8a97-0ce5-47f1-baad-32a61d41ee41","Type":"ContainerDied","Data":"1f4667b44f70ead1ccbccb7cf656f507372de5aaabdd9aa9cb9fd97af4d82523"}
Jan 21 13:13:54 crc kubenswrapper[4959]: I0121 13:13:54.798874 4959 generic.go:334] "Generic (PLEG): container finished" podID="7473609d-30e6-4896-bdd5-14b38fd71598" containerID="af3fc721621c9ed0dadf84e86dd14ccc21dc3356daa0d346a532d5be5c510fa6" exitCode=0
Jan 21 13:13:54 crc kubenswrapper[4959]: I0121 13:13:54.799075 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8" event={"ID":"7473609d-30e6-4896-bdd5-14b38fd71598","Type":"ContainerDied","Data":"af3fc721621c9ed0dadf84e86dd14ccc21dc3356daa0d346a532d5be5c510fa6"}
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.112692 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.191504 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297370 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd8a97-0ce5-47f1-baad-32a61d41ee41-serving-cert\") pod \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") "
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297411 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5k5n\" (UniqueName: \"kubernetes.io/projected/7473609d-30e6-4896-bdd5-14b38fd71598-kube-api-access-f5k5n\") pod \"7473609d-30e6-4896-bdd5-14b38fd71598\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") "
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297436 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcmph\" (UniqueName: \"kubernetes.io/projected/59fd8a97-0ce5-47f1-baad-32a61d41ee41-kube-api-access-xcmph\") pod \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") "
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297481 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-config\") pod \"7473609d-30e6-4896-bdd5-14b38fd71598\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") "
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297496 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-client-ca\") pod \"7473609d-30e6-4896-bdd5-14b38fd71598\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") "
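The two "Killing container with a grace period" entries above (gracePeriod=30) ended with both containers exiting 0 well inside the deadline. A generic Go sketch of the SIGTERM-then-SIGKILL pattern behind that flow; this is an illustration, not the kubelet's implementation, which delegates to the runtime via the CRI StopContainer call:

// Generic graceful-stop sketch: SIGTERM, wait up to the grace period,
// then SIGKILL. "sleep 300" stands in for a container process.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period, like the pods above
	case <-time.After(grace):
		// Past the deadline: hard kill; the process would then report
		// exitCode=137 (128+SIGKILL), as the startup-monitor did earlier.
		_ = cmd.Process.Kill()
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 30*time.Second))
}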
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297508 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-client-ca\") pod \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") "
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297549 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-proxy-ca-bundles\") pod \"7473609d-30e6-4896-bdd5-14b38fd71598\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") "
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297569 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-config\") pod \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\" (UID: \"59fd8a97-0ce5-47f1-baad-32a61d41ee41\") "
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.297600 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7473609d-30e6-4896-bdd5-14b38fd71598-serving-cert\") pod \"7473609d-30e6-4896-bdd5-14b38fd71598\" (UID: \"7473609d-30e6-4896-bdd5-14b38fd71598\") "
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.299690 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-client-ca" (OuterVolumeSpecName: "client-ca") pod "7473609d-30e6-4896-bdd5-14b38fd71598" (UID: "7473609d-30e6-4896-bdd5-14b38fd71598"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.299751 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-config" (OuterVolumeSpecName: "config") pod "59fd8a97-0ce5-47f1-baad-32a61d41ee41" (UID: "59fd8a97-0ce5-47f1-baad-32a61d41ee41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.299743 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7473609d-30e6-4896-bdd5-14b38fd71598" (UID: "7473609d-30e6-4896-bdd5-14b38fd71598"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.300150 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-config" (OuterVolumeSpecName: "config") pod "7473609d-30e6-4896-bdd5-14b38fd71598" (UID: "7473609d-30e6-4896-bdd5-14b38fd71598"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.300472 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-client-ca" (OuterVolumeSpecName: "client-ca") pod "59fd8a97-0ce5-47f1-baad-32a61d41ee41" (UID: "59fd8a97-0ce5-47f1-baad-32a61d41ee41"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.302931 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7473609d-30e6-4896-bdd5-14b38fd71598-kube-api-access-f5k5n" (OuterVolumeSpecName: "kube-api-access-f5k5n") pod "7473609d-30e6-4896-bdd5-14b38fd71598" (UID: "7473609d-30e6-4896-bdd5-14b38fd71598"). InnerVolumeSpecName "kube-api-access-f5k5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.302943 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59fd8a97-0ce5-47f1-baad-32a61d41ee41-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59fd8a97-0ce5-47f1-baad-32a61d41ee41" (UID: "59fd8a97-0ce5-47f1-baad-32a61d41ee41"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.302985 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59fd8a97-0ce5-47f1-baad-32a61d41ee41-kube-api-access-xcmph" (OuterVolumeSpecName: "kube-api-access-xcmph") pod "59fd8a97-0ce5-47f1-baad-32a61d41ee41" (UID: "59fd8a97-0ce5-47f1-baad-32a61d41ee41"). InnerVolumeSpecName "kube-api-access-xcmph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.303617 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7473609d-30e6-4896-bdd5-14b38fd71598-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7473609d-30e6-4896-bdd5-14b38fd71598" (UID: "7473609d-30e6-4896-bdd5-14b38fd71598"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399409 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-config\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399458 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399473 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399484 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7473609d-30e6-4896-bdd5-14b38fd71598-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399496 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd8a97-0ce5-47f1-baad-32a61d41ee41-config\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399506 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7473609d-30e6-4896-bdd5-14b38fd71598-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399517 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd8a97-0ce5-47f1-baad-32a61d41ee41-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399527 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5k5n\" (UniqueName: \"kubernetes.io/projected/7473609d-30e6-4896-bdd5-14b38fd71598-kube-api-access-f5k5n\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.399538 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcmph\" (UniqueName: \"kubernetes.io/projected/59fd8a97-0ce5-47f1-baad-32a61d41ee41-kube-api-access-xcmph\") on node \"crc\" DevicePath \"\""
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.537849 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.792103 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"]
Jan 21 13:13:55 crc kubenswrapper[4959]: E0121 13:13:55.792353 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7473609d-30e6-4896-bdd5-14b38fd71598" containerName="controller-manager"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.792365 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7473609d-30e6-4896-bdd5-14b38fd71598" containerName="controller-manager"
Jan 21 13:13:55 crc kubenswrapper[4959]: E0121 13:13:55.792379 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fd8a97-0ce5-47f1-baad-32a61d41ee41" containerName="route-controller-manager"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.792387 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fd8a97-0ce5-47f1-baad-32a61d41ee41" containerName="route-controller-manager"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.792474 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="59fd8a97-0ce5-47f1-baad-32a61d41ee41" containerName="route-controller-manager"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.792490 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7473609d-30e6-4896-bdd5-14b38fd71598" containerName="controller-manager"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.793241 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.795085 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"]
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.795822 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.803975 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-proxy-ca-bundles\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.804028 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frwpd\" (UniqueName: \"kubernetes.io/projected/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-kube-api-access-frwpd\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.804054 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-client-ca\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.804077 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-serving-cert\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.804105 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdh7n\" (UniqueName: \"kubernetes.io/projected/b0865a8e-0257-4e25-b82a-5fb80ee08acb-kube-api-access-fdh7n\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.804143 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-config\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.804199 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-client-ca\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.804224 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0865a8e-0257-4e25-b82a-5fb80ee08acb-serving-cert\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.804257 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-config\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.806389 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"]
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.808765 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl" event={"ID":"59fd8a97-0ce5-47f1-baad-32a61d41ee41","Type":"ContainerDied","Data":"5b23a4b40877ad9eaa04606a8cb0bc52c9959568ee51ae1c4049a23dc9e68ab7"}
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.808822 4959 scope.go:117] "RemoveContainer" containerID="1f4667b44f70ead1ccbccb7cf656f507372de5aaabdd9aa9cb9fd97af4d82523"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.808824 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.811851 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8" event={"ID":"7473609d-30e6-4896-bdd5-14b38fd71598","Type":"ContainerDied","Data":"05238892f73d85b7e32ab481924ceab72a67907f3389dfaab9e3b39849f5e622"}
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.811921 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.815827 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"]
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.830692 4959 scope.go:117] "RemoveContainer" containerID="af3fc721621c9ed0dadf84e86dd14ccc21dc3356daa0d346a532d5be5c510fa6"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.850275 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"]
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.854757 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-9szzl"]
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.858319 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"]
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.861163 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-wt6k8"]
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.904943 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-config\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.904990 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-proxy-ca-bundles\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.905014 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frwpd\" (UniqueName: \"kubernetes.io/projected/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-kube-api-access-frwpd\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.905032 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-client-ca\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.905048 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-serving-cert\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.905067 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdh7n\" (UniqueName: \"kubernetes.io/projected/b0865a8e-0257-4e25-b82a-5fb80ee08acb-kube-api-access-fdh7n\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.905085 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-config\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.905157 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-client-ca\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.905176 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0865a8e-0257-4e25-b82a-5fb80ee08acb-serving-cert\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.906242 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-client-ca\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.906434 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-config\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.906645 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-config\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.907197 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-proxy-ca-bundles\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.907234 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-client-ca\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.909585 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0865a8e-0257-4e25-b82a-5fb80ee08acb-serving-cert\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.909663 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-serving-cert\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.936323 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frwpd\" (UniqueName: \"kubernetes.io/projected/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-kube-api-access-frwpd\") pod \"route-controller-manager-6b65d9659f-cgh4d\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:55 crc kubenswrapper[4959]: I0121 13:13:55.936782 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdh7n\" (UniqueName: \"kubernetes.io/projected/b0865a8e-0257-4e25-b82a-5fb80ee08acb-kube-api-access-fdh7n\") pod \"controller-manager-bbffbcb67-fh6rs\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.124489 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.138642 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.398278 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"]
Jan 21 13:13:56 crc kubenswrapper[4959]: W0121 13:13:56.403938 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5fdb7f9_548f_4791_b6be_9e99523a8b5f.slice/crio-f2cacb38b6ac46a828a75b7268dc1b95f019a552163d1a69b22aabab62b82b1d WatchSource:0}: Error finding container f2cacb38b6ac46a828a75b7268dc1b95f019a552163d1a69b22aabab62b82b1d: Status 404 returned error can't find the container with id f2cacb38b6ac46a828a75b7268dc1b95f019a552163d1a69b22aabab62b82b1d
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.438868 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"]
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.735328 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.818889 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs" event={"ID":"b0865a8e-0257-4e25-b82a-5fb80ee08acb","Type":"ContainerStarted","Data":"f34fabcd7f103b83b7f036cfc7fb0a2e654f950b71d737aef44a9e3bba970641"}
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.818926 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs" event={"ID":"b0865a8e-0257-4e25-b82a-5fb80ee08acb","Type":"ContainerStarted","Data":"a19fe38b92d0e472077fb52d57aff962f4f06c473962e668f825c53ac89a05b2"}
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.819133 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.821242 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d" event={"ID":"f5fdb7f9-548f-4791-b6be-9e99523a8b5f","Type":"ContainerStarted","Data":"2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97"}
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.821278 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d" event={"ID":"f5fdb7f9-548f-4791-b6be-9e99523a8b5f","Type":"ContainerStarted","Data":"f2cacb38b6ac46a828a75b7268dc1b95f019a552163d1a69b22aabab62b82b1d"}
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.821448 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.824312 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.834590 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs" podStartSLOduration=2.834570979 podStartE2EDuration="2.834570979s" podCreationTimestamp="2026-01-21 13:13:54 +0000 UTC"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:13:56.831645165 +0000 UTC m=+297.794675728" watchObservedRunningTime="2026-01-21 13:13:56.834570979 +0000 UTC m=+297.797601532" Jan 21 13:13:56 crc kubenswrapper[4959]: I0121 13:13:56.877766 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d" podStartSLOduration=2.877748721 podStartE2EDuration="2.877748721s" podCreationTimestamp="2026-01-21 13:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:13:56.874608571 +0000 UTC m=+297.837639114" watchObservedRunningTime="2026-01-21 13:13:56.877748721 +0000 UTC m=+297.840779264" Jan 21 13:13:57 crc kubenswrapper[4959]: I0121 13:13:57.026292 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d" Jan 21 13:13:57 crc kubenswrapper[4959]: I0121 13:13:57.293369 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59fd8a97-0ce5-47f1-baad-32a61d41ee41" path="/var/lib/kubelet/pods/59fd8a97-0ce5-47f1-baad-32a61d41ee41/volumes" Jan 21 13:13:57 crc kubenswrapper[4959]: I0121 13:13:57.293905 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7473609d-30e6-4896-bdd5-14b38fd71598" path="/var/lib/kubelet/pods/7473609d-30e6-4896-bdd5-14b38fd71598/volumes" Jan 21 13:13:59 crc kubenswrapper[4959]: I0121 13:13:59.171837 4959 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 13:13:59 crc kubenswrapper[4959]: I0121 13:13:59.690182 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 13:14:00 crc kubenswrapper[4959]: I0121 13:14:00.734602 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 13:14:07 crc kubenswrapper[4959]: I0121 13:14:07.683976 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 13:14:08 crc kubenswrapper[4959]: I0121 13:14:08.539678 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 13:14:09 crc kubenswrapper[4959]: I0121 13:14:09.248626 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 13:14:11 crc kubenswrapper[4959]: I0121 13:14:11.260005 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 13:14:21 crc kubenswrapper[4959]: I0121 13:14:21.379882 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:14:21 crc kubenswrapper[4959]: I0121 13:14:21.380409 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:14:51 crc kubenswrapper[4959]: I0121 13:14:51.380118 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:14:51 crc kubenswrapper[4959]: I0121 13:14:51.380660 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.776528 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gq9bw"] Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.777690 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.787174 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gq9bw"] Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.891232 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.891297 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-trusted-ca\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.891338 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.891368 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gzlx\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-kube-api-access-5gzlx\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.891428 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.891452 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-bound-sa-token\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.891484 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-registry-tls\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.891539 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-registry-certificates\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.916437 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.992434 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-registry-tls\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.992533 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-registry-certificates\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.992558 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.992582 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-trusted-ca\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.992617 4959 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.992647 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gzlx\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-kube-api-access-5gzlx\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.992676 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-bound-sa-token\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.994278 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.994837 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-registry-certificates\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.995381 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-trusted-ca\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:52 crc kubenswrapper[4959]: I0121 13:14:52.999495 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:53 crc kubenswrapper[4959]: I0121 13:14:53.003113 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-registry-tls\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:53 crc kubenswrapper[4959]: I0121 13:14:53.010695 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-bound-sa-token\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: 
\"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:53 crc kubenswrapper[4959]: I0121 13:14:53.012293 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gzlx\" (UniqueName: \"kubernetes.io/projected/16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13-kube-api-access-5gzlx\") pod \"image-registry-66df7c8f76-gq9bw\" (UID: \"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13\") " pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:53 crc kubenswrapper[4959]: I0121 13:14:53.095318 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:53 crc kubenswrapper[4959]: I0121 13:14:53.487223 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gq9bw"] Jan 21 13:14:53 crc kubenswrapper[4959]: W0121 13:14:53.494176 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fac9a9_1b4f_40f9_b6b8_b5ef30b58d13.slice/crio-02d75a288fab4ca581d03d4d4a99cdbf80f273211aaf2cbaeaef2825272fec95 WatchSource:0}: Error finding container 02d75a288fab4ca581d03d4d4a99cdbf80f273211aaf2cbaeaef2825272fec95: Status 404 returned error can't find the container with id 02d75a288fab4ca581d03d4d4a99cdbf80f273211aaf2cbaeaef2825272fec95 Jan 21 13:14:54 crc kubenswrapper[4959]: I0121 13:14:54.117304 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" event={"ID":"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13","Type":"ContainerStarted","Data":"06d208a66bce878ee77cc4594e33df68c48f16d6ee980fdca8c2f8da64e0eec2"} Jan 21 13:14:54 crc kubenswrapper[4959]: I0121 13:14:54.117638 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" event={"ID":"16fac9a9-1b4f-40f9-b6b8-b5ef30b58d13","Type":"ContainerStarted","Data":"02d75a288fab4ca581d03d4d4a99cdbf80f273211aaf2cbaeaef2825272fec95"} Jan 21 13:14:54 crc kubenswrapper[4959]: I0121 13:14:54.117654 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:14:54 crc kubenswrapper[4959]: I0121 13:14:54.140158 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" podStartSLOduration=2.140136562 podStartE2EDuration="2.140136562s" podCreationTimestamp="2026-01-21 13:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:14:54.134910665 +0000 UTC m=+355.097941228" watchObservedRunningTime="2026-01-21 13:14:54.140136562 +0000 UTC m=+355.103167135" Jan 21 13:14:54 crc kubenswrapper[4959]: I0121 13:14:54.618664 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"] Jan 21 13:14:54 crc kubenswrapper[4959]: I0121 13:14:54.619108 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs" podUID="b0865a8e-0257-4e25-b82a-5fb80ee08acb" containerName="controller-manager" containerID="cri-o://f34fabcd7f103b83b7f036cfc7fb0a2e654f950b71d737aef44a9e3bba970641" gracePeriod=30 Jan 21 13:14:54 crc kubenswrapper[4959]: I0121 13:14:54.634247 4959 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"] Jan 21 13:14:54 crc kubenswrapper[4959]: I0121 13:14:54.634491 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d" podUID="f5fdb7f9-548f-4791-b6be-9e99523a8b5f" containerName="route-controller-manager" containerID="cri-o://2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97" gracePeriod=30 Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.125974 4959 generic.go:334] "Generic (PLEG): container finished" podID="b0865a8e-0257-4e25-b82a-5fb80ee08acb" containerID="f34fabcd7f103b83b7f036cfc7fb0a2e654f950b71d737aef44a9e3bba970641" exitCode=0 Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.126760 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs" event={"ID":"b0865a8e-0257-4e25-b82a-5fb80ee08acb","Type":"ContainerDied","Data":"f34fabcd7f103b83b7f036cfc7fb0a2e654f950b71d737aef44a9e3bba970641"} Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.524022 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.531621 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628011 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-config\") pod \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628072 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-client-ca\") pod \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628120 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdh7n\" (UniqueName: \"kubernetes.io/projected/b0865a8e-0257-4e25-b82a-5fb80ee08acb-kube-api-access-fdh7n\") pod \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628174 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frwpd\" (UniqueName: \"kubernetes.io/projected/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-kube-api-access-frwpd\") pod \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628199 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-serving-cert\") pod \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628236 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-config\") pod \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628284 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-proxy-ca-bundles\") pod \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628341 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-client-ca\") pod \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\" (UID: \"f5fdb7f9-548f-4791-b6be-9e99523a8b5f\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628408 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0865a8e-0257-4e25-b82a-5fb80ee08acb-serving-cert\") pod \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\" (UID: \"b0865a8e-0257-4e25-b82a-5fb80ee08acb\") " Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628918 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-config" (OuterVolumeSpecName: "config") pod "f5fdb7f9-548f-4791-b6be-9e99523a8b5f" (UID: "f5fdb7f9-548f-4791-b6be-9e99523a8b5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.628973 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0865a8e-0257-4e25-b82a-5fb80ee08acb" (UID: "b0865a8e-0257-4e25-b82a-5fb80ee08acb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.630033 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-client-ca" (OuterVolumeSpecName: "client-ca") pod "f5fdb7f9-548f-4791-b6be-9e99523a8b5f" (UID: "f5fdb7f9-548f-4791-b6be-9e99523a8b5f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.630735 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b0865a8e-0257-4e25-b82a-5fb80ee08acb" (UID: "b0865a8e-0257-4e25-b82a-5fb80ee08acb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.631336 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-config" (OuterVolumeSpecName: "config") pod "b0865a8e-0257-4e25-b82a-5fb80ee08acb" (UID: "b0865a8e-0257-4e25-b82a-5fb80ee08acb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.633841 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f5fdb7f9-548f-4791-b6be-9e99523a8b5f" (UID: "f5fdb7f9-548f-4791-b6be-9e99523a8b5f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.633865 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0865a8e-0257-4e25-b82a-5fb80ee08acb-kube-api-access-fdh7n" (OuterVolumeSpecName: "kube-api-access-fdh7n") pod "b0865a8e-0257-4e25-b82a-5fb80ee08acb" (UID: "b0865a8e-0257-4e25-b82a-5fb80ee08acb"). InnerVolumeSpecName "kube-api-access-fdh7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.634389 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-kube-api-access-frwpd" (OuterVolumeSpecName: "kube-api-access-frwpd") pod "f5fdb7f9-548f-4791-b6be-9e99523a8b5f" (UID: "f5fdb7f9-548f-4791-b6be-9e99523a8b5f"). InnerVolumeSpecName "kube-api-access-frwpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.643405 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0865a8e-0257-4e25-b82a-5fb80ee08acb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0865a8e-0257-4e25-b82a-5fb80ee08acb" (UID: "b0865a8e-0257-4e25-b82a-5fb80ee08acb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730041 4959 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730079 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730089 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0865a8e-0257-4e25-b82a-5fb80ee08acb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730122 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730135 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdh7n\" (UniqueName: \"kubernetes.io/projected/b0865a8e-0257-4e25-b82a-5fb80ee08acb-kube-api-access-fdh7n\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730147 4959 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730160 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frwpd\" (UniqueName: \"kubernetes.io/projected/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-kube-api-access-frwpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730173 4959 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5fdb7f9-548f-4791-b6be-9e99523a8b5f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.730184 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0865a8e-0257-4e25-b82a-5fb80ee08acb-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.829930 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"] Jan 21 13:14:55 crc kubenswrapper[4959]: E0121 13:14:55.830212 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fdb7f9-548f-4791-b6be-9e99523a8b5f" containerName="route-controller-manager" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.830227 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fdb7f9-548f-4791-b6be-9e99523a8b5f" containerName="route-controller-manager" Jan 21 13:14:55 crc kubenswrapper[4959]: E0121 13:14:55.830240 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0865a8e-0257-4e25-b82a-5fb80ee08acb" containerName="controller-manager" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.830248 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0865a8e-0257-4e25-b82a-5fb80ee08acb" containerName="controller-manager" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.830373 4959 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f5fdb7f9-548f-4791-b6be-9e99523a8b5f" containerName="route-controller-manager" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.830394 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0865a8e-0257-4e25-b82a-5fb80ee08acb" containerName="controller-manager" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.830956 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.834146 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"] Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.834868 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.843146 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"] Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.848123 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"] Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.932579 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-proxy-ca-bundles\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.932666 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-client-ca\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.932725 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-config\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.932797 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wltx\" (UniqueName: \"kubernetes.io/projected/07a31fad-4270-47cf-8664-9eb52447fe1a-kube-api-access-6wltx\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct" Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.932840 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a31fad-4270-47cf-8664-9eb52447fe1a-client-ca\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct" 
Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.932950 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a31fad-4270-47cf-8664-9eb52447fe1a-config\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.933007 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtz5\" (UniqueName: \"kubernetes.io/projected/58ca3a78-e2e2-4383-9f9c-7fa37139f970-kube-api-access-9xtz5\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.933042 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a31fad-4270-47cf-8664-9eb52447fe1a-serving-cert\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:55 crc kubenswrapper[4959]: I0121 13:14:55.933158 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ca3a78-e2e2-4383-9f9c-7fa37139f970-serving-cert\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034486 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a31fad-4270-47cf-8664-9eb52447fe1a-client-ca\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034579 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a31fad-4270-47cf-8664-9eb52447fe1a-config\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034615 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtz5\" (UniqueName: \"kubernetes.io/projected/58ca3a78-e2e2-4383-9f9c-7fa37139f970-kube-api-access-9xtz5\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034632 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a31fad-4270-47cf-8664-9eb52447fe1a-serving-cert\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034656 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ca3a78-e2e2-4383-9f9c-7fa37139f970-serving-cert\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034701 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-proxy-ca-bundles\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034718 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-config\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034733 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-client-ca\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.034754 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wltx\" (UniqueName: \"kubernetes.io/projected/07a31fad-4270-47cf-8664-9eb52447fe1a-kube-api-access-6wltx\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.035888 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07a31fad-4270-47cf-8664-9eb52447fe1a-client-ca\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.035908 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-client-ca\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.036079 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07a31fad-4270-47cf-8664-9eb52447fe1a-config\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.036116 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-proxy-ca-bundles\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.036980 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ca3a78-e2e2-4383-9f9c-7fa37139f970-config\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.039231 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07a31fad-4270-47cf-8664-9eb52447fe1a-serving-cert\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.039271 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ca3a78-e2e2-4383-9f9c-7fa37139f970-serving-cert\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.052161 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wltx\" (UniqueName: \"kubernetes.io/projected/07a31fad-4270-47cf-8664-9eb52447fe1a-kube-api-access-6wltx\") pod \"route-controller-manager-798f7896f6-x6rct\" (UID: \"07a31fad-4270-47cf-8664-9eb52447fe1a\") " pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.057364 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtz5\" (UniqueName: \"kubernetes.io/projected/58ca3a78-e2e2-4383-9f9c-7fa37139f970-kube-api-access-9xtz5\") pod \"controller-manager-c4ff7d7c6-qlz5v\" (UID: \"58ca3a78-e2e2-4383-9f9c-7fa37139f970\") " pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.132703 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs" event={"ID":"b0865a8e-0257-4e25-b82a-5fb80ee08acb","Type":"ContainerDied","Data":"a19fe38b92d0e472077fb52d57aff962f4f06c473962e668f825c53ac89a05b2"}
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.132760 4959 scope.go:117] "RemoveContainer" containerID="f34fabcd7f103b83b7f036cfc7fb0a2e654f950b71d737aef44a9e3bba970641"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.132886 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.145159 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.145273 4959 generic.go:334] "Generic (PLEG): container finished" podID="f5fdb7f9-548f-4791-b6be-9e99523a8b5f" containerID="2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97" exitCode=0
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.145356 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d" event={"ID":"f5fdb7f9-548f-4791-b6be-9e99523a8b5f","Type":"ContainerDied","Data":"2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97"}
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.145411 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d" event={"ID":"f5fdb7f9-548f-4791-b6be-9e99523a8b5f","Type":"ContainerDied","Data":"f2cacb38b6ac46a828a75b7268dc1b95f019a552163d1a69b22aabab62b82b1d"}
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.147972 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.158240 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.170400 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"]
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.181274 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bbffbcb67-fh6rs"]
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.185062 4959 scope.go:117] "RemoveContainer" containerID="2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.185847 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"]
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.192868 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b65d9659f-cgh4d"]
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.207735 4959 scope.go:117] "RemoveContainer" containerID="2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97"
Jan 21 13:14:56 crc kubenswrapper[4959]: E0121 13:14:56.208293 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97\": container with ID starting with 2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97 not found: ID does not exist" containerID="2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.208332 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97"} err="failed to get container status \"2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97\": rpc error: code = NotFound desc = could not find container \"2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97\": container with ID starting with 2311060b8854a8035a84e43afe865272d3588967978ddbc25b21a7bc68cb6a97 not found: ID does not exist"
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.654311 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"]
Jan 21 13:14:56 crc kubenswrapper[4959]: I0121 13:14:56.662505 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"]
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.150932 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v" event={"ID":"58ca3a78-e2e2-4383-9f9c-7fa37139f970","Type":"ContainerStarted","Data":"dc4c72913d5145d480781a8c8ddcf9de7694c7a94648d9473ea4804336f20647"}
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.151349 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v" event={"ID":"58ca3a78-e2e2-4383-9f9c-7fa37139f970","Type":"ContainerStarted","Data":"52bd128652ff5e1c13bc8fcbf3d88415fd99879841bf811641a2b8893c6eaeb3"}
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.151792 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.156356 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct" event={"ID":"07a31fad-4270-47cf-8664-9eb52447fe1a","Type":"ContainerStarted","Data":"3d08d3cf10a000e763fcb963a40e5fd4b0eef41a10433a23392ae5d09350b535"}
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.156404 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct" event={"ID":"07a31fad-4270-47cf-8664-9eb52447fe1a","Type":"ContainerStarted","Data":"3346295339d22885f1b462efa76eb7c1a0d08db67339197ea6e993c2f522f1ae"}
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.157228 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.159453 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v"
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.164803 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct"
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.177142 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c4ff7d7c6-qlz5v" podStartSLOduration=3.177122945 podStartE2EDuration="3.177122945s" podCreationTimestamp="2026-01-21 13:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:14:57.176579681 +0000 UTC m=+358.139610234" watchObservedRunningTime="2026-01-21 13:14:57.177122945 +0000 UTC m=+358.140153488"
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.229911 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-798f7896f6-x6rct" podStartSLOduration=3.229892689 podStartE2EDuration="3.229892689s" podCreationTimestamp="2026-01-21 13:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:14:57.226407158 +0000 UTC m=+358.189437711" watchObservedRunningTime="2026-01-21 13:14:57.229892689 +0000 UTC m=+358.192923222"
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.308718 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0865a8e-0257-4e25-b82a-5fb80ee08acb" path="/var/lib/kubelet/pods/b0865a8e-0257-4e25-b82a-5fb80ee08acb/volumes"
Jan 21 13:14:57 crc kubenswrapper[4959]: I0121 13:14:57.309221 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fdb7f9-548f-4791-b6be-9e99523a8b5f" path="/var/lib/kubelet/pods/f5fdb7f9-548f-4791-b6be-9e99523a8b5f/volumes"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.176910 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"]
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.181206 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.184208 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.184476 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.198134 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"]
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.294578 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82b6db65-10de-4913-a8f6-b9040c016760-config-volume\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.294665 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82b6db65-10de-4913-a8f6-b9040c016760-secret-volume\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.294684 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mvdf\" (UniqueName: \"kubernetes.io/projected/82b6db65-10de-4913-a8f6-b9040c016760-kube-api-access-6mvdf\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.395807 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82b6db65-10de-4913-a8f6-b9040c016760-secret-volume\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.395870 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mvdf\" (UniqueName: \"kubernetes.io/projected/82b6db65-10de-4913-a8f6-b9040c016760-kube-api-access-6mvdf\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.395965 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82b6db65-10de-4913-a8f6-b9040c016760-config-volume\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.396991 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82b6db65-10de-4913-a8f6-b9040c016760-config-volume\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.404827 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82b6db65-10de-4913-a8f6-b9040c016760-secret-volume\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.414965 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mvdf\" (UniqueName: \"kubernetes.io/projected/82b6db65-10de-4913-a8f6-b9040c016760-kube-api-access-6mvdf\") pod \"collect-profiles-29483355-wq7fw\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.497677 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:00 crc kubenswrapper[4959]: W0121 13:15:00.895157 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82b6db65_10de_4913_a8f6_b9040c016760.slice/crio-f7b6f330d8c5c4f77639cfb9bb33ad83f72a77162347e009a2ca6c9d0bc537e2 WatchSource:0}: Error finding container f7b6f330d8c5c4f77639cfb9bb33ad83f72a77162347e009a2ca6c9d0bc537e2: Status 404 returned error can't find the container with id f7b6f330d8c5c4f77639cfb9bb33ad83f72a77162347e009a2ca6c9d0bc537e2
Jan 21 13:15:00 crc kubenswrapper[4959]: I0121 13:15:00.906714 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"]
Jan 21 13:15:01 crc kubenswrapper[4959]: I0121 13:15:01.192594 4959 generic.go:334] "Generic (PLEG): container finished" podID="82b6db65-10de-4913-a8f6-b9040c016760" containerID="bb43e52d206f37570ae97f69ad2c6ba727f2bf0c3942d3f72de2bcdddc2c2f38" exitCode=0
Jan 21 13:15:01 crc kubenswrapper[4959]: I0121 13:15:01.192647 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw" event={"ID":"82b6db65-10de-4913-a8f6-b9040c016760","Type":"ContainerDied","Data":"bb43e52d206f37570ae97f69ad2c6ba727f2bf0c3942d3f72de2bcdddc2c2f38"}
Jan 21 13:15:01 crc kubenswrapper[4959]: I0121 13:15:01.192683 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw" event={"ID":"82b6db65-10de-4913-a8f6-b9040c016760","Type":"ContainerStarted","Data":"f7b6f330d8c5c4f77639cfb9bb33ad83f72a77162347e009a2ca6c9d0bc537e2"}
Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.526219 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"
Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.627465 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82b6db65-10de-4913-a8f6-b9040c016760-secret-volume\") pod \"82b6db65-10de-4913-a8f6-b9040c016760\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") "
Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.627576 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82b6db65-10de-4913-a8f6-b9040c016760-config-volume\") pod \"82b6db65-10de-4913-a8f6-b9040c016760\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") "
Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.627655 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mvdf\" (UniqueName: \"kubernetes.io/projected/82b6db65-10de-4913-a8f6-b9040c016760-kube-api-access-6mvdf\") pod \"82b6db65-10de-4913-a8f6-b9040c016760\" (UID: \"82b6db65-10de-4913-a8f6-b9040c016760\") "
Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.628606 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b6db65-10de-4913-a8f6-b9040c016760-config-volume" (OuterVolumeSpecName: "config-volume") pod "82b6db65-10de-4913-a8f6-b9040c016760" (UID: "82b6db65-10de-4913-a8f6-b9040c016760"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.633406 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b6db65-10de-4913-a8f6-b9040c016760-kube-api-access-6mvdf" (OuterVolumeSpecName: "kube-api-access-6mvdf") pod "82b6db65-10de-4913-a8f6-b9040c016760" (UID: "82b6db65-10de-4913-a8f6-b9040c016760"). InnerVolumeSpecName "kube-api-access-6mvdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.633560 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b6db65-10de-4913-a8f6-b9040c016760-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "82b6db65-10de-4913-a8f6-b9040c016760" (UID: "82b6db65-10de-4913-a8f6-b9040c016760"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.729420 4959 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82b6db65-10de-4913-a8f6-b9040c016760-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.729463 4959 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82b6db65-10de-4913-a8f6-b9040c016760-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:02 crc kubenswrapper[4959]: I0121 13:15:02.729479 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mvdf\" (UniqueName: \"kubernetes.io/projected/82b6db65-10de-4913-a8f6-b9040c016760-kube-api-access-6mvdf\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:03 crc kubenswrapper[4959]: I0121 13:15:03.210670 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw" event={"ID":"82b6db65-10de-4913-a8f6-b9040c016760","Type":"ContainerDied","Data":"f7b6f330d8c5c4f77639cfb9bb33ad83f72a77162347e009a2ca6c9d0bc537e2"} Jan 21 13:15:03 crc kubenswrapper[4959]: I0121 13:15:03.211164 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7b6f330d8c5c4f77639cfb9bb33ad83f72a77162347e009a2ca6c9d0bc537e2" Jan 21 13:15:03 crc kubenswrapper[4959]: I0121 13:15:03.210737 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw" Jan 21 13:15:13 crc kubenswrapper[4959]: I0121 13:15:13.103218 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gq9bw" Jan 21 13:15:13 crc kubenswrapper[4959]: I0121 13:15:13.162366 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xlvm4"] Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.005030 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tv8w6"] Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.006063 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tv8w6" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerName="registry-server" containerID="cri-o://1db44a9ed4e8dad5b13fd77972561461c97338f32aa6c92ecd13f1b29c9badf2" gracePeriod=30 Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.016360 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x66q8"] Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.016659 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x66q8" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerName="registry-server" containerID="cri-o://41cbf7932040d1e18c3d7f8e19c134f8e8b913f4dc4f25472721988c6e3574a8" gracePeriod=30 Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.032231 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr76d"] Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.032499 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" containerID="cri-o://47f2e3469b1551818252a1da3375c43b6962067f07526c873a436b54add87765" gracePeriod=30 Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.043693 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhqcb"] Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.044061 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qhqcb" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="registry-server" containerID="cri-o://a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36" gracePeriod=30 Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.052763 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72ps5"] Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.053027 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-72ps5" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="registry-server" containerID="cri-o://bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7" gracePeriod=30 Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.061952 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khptd"] Jan 21 13:15:21 crc kubenswrapper[4959]: E0121 13:15:21.062330 4959 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="82b6db65-10de-4913-a8f6-b9040c016760" containerName="collect-profiles" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.062350 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b6db65-10de-4913-a8f6-b9040c016760" containerName="collect-profiles" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.062481 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b6db65-10de-4913-a8f6-b9040c016760" containerName="collect-profiles" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.063018 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.068934 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khptd"] Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.179198 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hf5c\" (UniqueName: \"kubernetes.io/projected/893eff4f-b820-41bf-9278-3c7daaeeb0b7-kube-api-access-5hf5c\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.179348 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/893eff4f-b820-41bf-9278-3c7daaeeb0b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.179464 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/893eff4f-b820-41bf-9278-3c7daaeeb0b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.280568 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/893eff4f-b820-41bf-9278-3c7daaeeb0b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.280631 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hf5c\" (UniqueName: \"kubernetes.io/projected/893eff4f-b820-41bf-9278-3c7daaeeb0b7-kube-api-access-5hf5c\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.280671 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/893eff4f-b820-41bf-9278-3c7daaeeb0b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc 
kubenswrapper[4959]: I0121 13:15:21.281698 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/893eff4f-b820-41bf-9278-3c7daaeeb0b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.286682 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/893eff4f-b820-41bf-9278-3c7daaeeb0b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.297646 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hf5c\" (UniqueName: \"kubernetes.io/projected/893eff4f-b820-41bf-9278-3c7daaeeb0b7-kube-api-access-5hf5c\") pod \"marketplace-operator-79b997595-khptd\" (UID: \"893eff4f-b820-41bf-9278-3c7daaeeb0b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.379923 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.379989 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.380035 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.380583 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13642c440229a641b98715e5b86f12964d19facd8a63f7a2ba469a1067d57fdf"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.380639 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://13642c440229a641b98715e5b86f12964d19facd8a63f7a2ba469a1067d57fdf" gracePeriod=600 Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.381374 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:21 crc kubenswrapper[4959]: I0121 13:15:21.795694 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khptd"] Jan 21 13:15:22 crc kubenswrapper[4959]: E0121 13:15:22.044196 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36 is running failed: container process not found" containerID="a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:15:22 crc kubenswrapper[4959]: E0121 13:15:22.045270 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36 is running failed: container process not found" containerID="a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:15:22 crc kubenswrapper[4959]: E0121 13:15:22.045853 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36 is running failed: container process not found" containerID="a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:15:22 crc kubenswrapper[4959]: E0121 13:15:22.045893 4959 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-qhqcb" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="registry-server" Jan 21 13:15:22 crc kubenswrapper[4959]: I0121 13:15:22.337814 4959 generic.go:334] "Generic (PLEG): container finished" podID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerID="1db44a9ed4e8dad5b13fd77972561461c97338f32aa6c92ecd13f1b29c9badf2" exitCode=0 Jan 21 13:15:22 crc kubenswrapper[4959]: I0121 13:15:22.337863 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv8w6" event={"ID":"ea3b719b-be6f-4a11-a13c-ba1bfca953a7","Type":"ContainerDied","Data":"1db44a9ed4e8dad5b13fd77972561461c97338f32aa6c92ecd13f1b29c9badf2"} Jan 21 13:15:22 crc kubenswrapper[4959]: I0121 13:15:22.340263 4959 generic.go:334] "Generic (PLEG): container finished" podID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerID="a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36" exitCode=0 Jan 21 13:15:22 crc kubenswrapper[4959]: I0121 13:15:22.340376 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhqcb" event={"ID":"0220f7cc-761e-4995-aa56-6c543cd5a294","Type":"ContainerDied","Data":"a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36"} Jan 21 13:15:22 crc kubenswrapper[4959]: I0121 13:15:22.341364 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khptd" 
event={"ID":"893eff4f-b820-41bf-9278-3c7daaeeb0b7","Type":"ContainerStarted","Data":"7ea671a6c997791c354cd106fcd29dcb346e83d752aa52cc3f42efbc7363ee9d"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.036417 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhqcb" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.041682 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.102771 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-catalog-content\") pod \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.102823 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-utilities\") pod \"0220f7cc-761e-4995-aa56-6c543cd5a294\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.102849 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtncs\" (UniqueName: \"kubernetes.io/projected/0220f7cc-761e-4995-aa56-6c543cd5a294-kube-api-access-qtncs\") pod \"0220f7cc-761e-4995-aa56-6c543cd5a294\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.102910 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-catalog-content\") pod \"0220f7cc-761e-4995-aa56-6c543cd5a294\" (UID: \"0220f7cc-761e-4995-aa56-6c543cd5a294\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.102952 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-utilities\") pod \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.102975 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vpg4\" (UniqueName: \"kubernetes.io/projected/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-kube-api-access-5vpg4\") pod \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\" (UID: \"ea3b719b-be6f-4a11-a13c-ba1bfca953a7\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.104050 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-utilities" (OuterVolumeSpecName: "utilities") pod "ea3b719b-be6f-4a11-a13c-ba1bfca953a7" (UID: "ea3b719b-be6f-4a11-a13c-ba1bfca953a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.104614 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-utilities" (OuterVolumeSpecName: "utilities") pod "0220f7cc-761e-4995-aa56-6c543cd5a294" (UID: "0220f7cc-761e-4995-aa56-6c543cd5a294"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.109042 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0220f7cc-761e-4995-aa56-6c543cd5a294-kube-api-access-qtncs" (OuterVolumeSpecName: "kube-api-access-qtncs") pod "0220f7cc-761e-4995-aa56-6c543cd5a294" (UID: "0220f7cc-761e-4995-aa56-6c543cd5a294"). InnerVolumeSpecName "kube-api-access-qtncs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.109428 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-kube-api-access-5vpg4" (OuterVolumeSpecName: "kube-api-access-5vpg4") pod "ea3b719b-be6f-4a11-a13c-ba1bfca953a7" (UID: "ea3b719b-be6f-4a11-a13c-ba1bfca953a7"). InnerVolumeSpecName "kube-api-access-5vpg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.133671 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0220f7cc-761e-4995-aa56-6c543cd5a294" (UID: "0220f7cc-761e-4995-aa56-6c543cd5a294"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.173347 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea3b719b-be6f-4a11-a13c-ba1bfca953a7" (UID: "ea3b719b-be6f-4a11-a13c-ba1bfca953a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.204278 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.204314 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.204326 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtncs\" (UniqueName: \"kubernetes.io/projected/0220f7cc-761e-4995-aa56-6c543cd5a294-kube-api-access-qtncs\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.204339 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0220f7cc-761e-4995-aa56-6c543cd5a294-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.204349 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.204359 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vpg4\" (UniqueName: \"kubernetes.io/projected/ea3b719b-be6f-4a11-a13c-ba1bfca953a7-kube-api-access-5vpg4\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: E0121 13:15:23.234577 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7 is running failed: container process not found" containerID="bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:15:23 crc kubenswrapper[4959]: E0121 13:15:23.235044 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7 is running failed: container process not found" containerID="bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:15:23 crc kubenswrapper[4959]: E0121 13:15:23.235373 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7 is running failed: container process not found" containerID="bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:15:23 crc kubenswrapper[4959]: E0121 13:15:23.235411 4959 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-72ps5" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="registry-server" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.352120 4959 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khptd" event={"ID":"893eff4f-b820-41bf-9278-3c7daaeeb0b7","Type":"ContainerStarted","Data":"3eaee9b9744d67f8feaf29c3ee4d473e7f5e107af73ba42075218dfd915d6fba"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.354053 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.357983 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-khptd" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.358851 4959 generic.go:334] "Generic (PLEG): container finished" podID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerID="41cbf7932040d1e18c3d7f8e19c134f8e8b913f4dc4f25472721988c6e3574a8" exitCode=0 Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.358911 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x66q8" event={"ID":"c01bf13b-8ada-46de-a969-cb5691c8d1c0","Type":"ContainerDied","Data":"41cbf7932040d1e18c3d7f8e19c134f8e8b913f4dc4f25472721988c6e3574a8"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.363262 4959 generic.go:334] "Generic (PLEG): container finished" podID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerID="47f2e3469b1551818252a1da3375c43b6962067f07526c873a436b54add87765" exitCode=0 Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.363390 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" event={"ID":"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb","Type":"ContainerDied","Data":"47f2e3469b1551818252a1da3375c43b6962067f07526c873a436b54add87765"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.363488 4959 scope.go:117] "RemoveContainer" containerID="1bcb24eb4294b135dc425adf3e17e57e7033a4322754e44501b625ce697d3bf5" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.385147 4959 generic.go:334] "Generic (PLEG): container finished" podID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerID="bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7" exitCode=0 Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.385207 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72ps5" event={"ID":"8899f354-3f43-4222-88a9-221ca1a6dc6e","Type":"ContainerDied","Data":"bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.389676 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-khptd" podStartSLOduration=2.38965794 podStartE2EDuration="2.38965794s" podCreationTimestamp="2026-01-21 13:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:15:23.385007918 +0000 UTC m=+384.348038461" watchObservedRunningTime="2026-01-21 13:15:23.38965794 +0000 UTC m=+384.352688483" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.394290 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="13642c440229a641b98715e5b86f12964d19facd8a63f7a2ba469a1067d57fdf" exitCode=0 Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.394402 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"13642c440229a641b98715e5b86f12964d19facd8a63f7a2ba469a1067d57fdf"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.394432 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"aad89835c8a01e37a654d9249967f8eba913bfa1b726fe57032b28a559caff14"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.399015 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv8w6" event={"ID":"ea3b719b-be6f-4a11-a13c-ba1bfca953a7","Type":"ContainerDied","Data":"6653cab18475ee956ce9996bd27de8711a19eb16eecd39a28ba6adbe5253d337"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.399116 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv8w6" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.413906 4959 scope.go:117] "RemoveContainer" containerID="8df947f9fb8830e6d075f0bddf0533f924baaabee81a9adc4b940146606c1d91" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.414504 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhqcb" event={"ID":"0220f7cc-761e-4995-aa56-6c543cd5a294","Type":"ContainerDied","Data":"d77f219b521d3dbcf56882168c812dd6b036a83ea0913c3a04bde7f504fb21c0"} Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.414640 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhqcb" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.436609 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tv8w6"] Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.440233 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tv8w6"] Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.466026 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhqcb"] Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.469580 4959 scope.go:117] "RemoveContainer" containerID="1db44a9ed4e8dad5b13fd77972561461c97338f32aa6c92ecd13f1b29c9badf2" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.473947 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhqcb"] Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.499286 4959 scope.go:117] "RemoveContainer" containerID="f1bc6f24ffaa4fdd77578fd04546f77e62289a8fe0895f48d6c6d8c9b4bee1d0" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.524913 4959 scope.go:117] "RemoveContainer" containerID="e42180c242e4e5f98347333721fc50fcda28ddeb860f7559fcd266cc7059c9c9" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.575830 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72ps5" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.578999 4959 scope.go:117] "RemoveContainer" containerID="a155a81db5117bbcfbefe29017ef6bad773905d9829aae91e6aa5681405b4a36" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.605088 4959 scope.go:117] "RemoveContainer" containerID="b835b0db62594ce54a128e491431ed322b4f0fd9531a1dc7ab1ab01f82a727cc" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.619691 4959 scope.go:117] "RemoveContainer" containerID="b389f490a63a4632d398622a5b833fd036b8f1775631635c4d51a624ccb8f7d5" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.636908 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.667086 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.711233 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-catalog-content\") pod \"8899f354-3f43-4222-88a9-221ca1a6dc6e\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.711318 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-utilities\") pod \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.711383 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5mgl\" (UniqueName: \"kubernetes.io/projected/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-kube-api-access-l5mgl\") pod \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.711421 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrbkb\" (UniqueName: \"kubernetes.io/projected/c01bf13b-8ada-46de-a969-cb5691c8d1c0-kube-api-access-xrbkb\") pod \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.711457 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-catalog-content\") pod \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\" (UID: \"c01bf13b-8ada-46de-a969-cb5691c8d1c0\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.712473 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-utilities" (OuterVolumeSpecName: "utilities") pod "c01bf13b-8ada-46de-a969-cb5691c8d1c0" (UID: "c01bf13b-8ada-46de-a969-cb5691c8d1c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.715169 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c01bf13b-8ada-46de-a969-cb5691c8d1c0-kube-api-access-xrbkb" (OuterVolumeSpecName: "kube-api-access-xrbkb") pod "c01bf13b-8ada-46de-a969-cb5691c8d1c0" (UID: "c01bf13b-8ada-46de-a969-cb5691c8d1c0"). InnerVolumeSpecName "kube-api-access-xrbkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.715401 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-kube-api-access-l5mgl" (OuterVolumeSpecName: "kube-api-access-l5mgl") pod "f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" (UID: "f29bfd9b-784d-46af-a90a-47ab4f2c5dfb"). InnerVolumeSpecName "kube-api-access-l5mgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.715728 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" (UID: "f29bfd9b-784d-46af-a90a-47ab4f2c5dfb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.711483 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-operator-metrics\") pod \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.718612 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-trusted-ca\") pod \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\" (UID: \"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.718672 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgzf5\" (UniqueName: \"kubernetes.io/projected/8899f354-3f43-4222-88a9-221ca1a6dc6e-kube-api-access-tgzf5\") pod \"8899f354-3f43-4222-88a9-221ca1a6dc6e\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.718706 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-utilities\") pod \"8899f354-3f43-4222-88a9-221ca1a6dc6e\" (UID: \"8899f354-3f43-4222-88a9-221ca1a6dc6e\") " Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.719185 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.719213 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5mgl\" (UniqueName: \"kubernetes.io/projected/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-kube-api-access-l5mgl\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.719226 4959 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xrbkb\" (UniqueName: \"kubernetes.io/projected/c01bf13b-8ada-46de-a969-cb5691c8d1c0-kube-api-access-xrbkb\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.719239 4959 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.719366 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" (UID: "f29bfd9b-784d-46af-a90a-47ab4f2c5dfb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.720068 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-utilities" (OuterVolumeSpecName: "utilities") pod "8899f354-3f43-4222-88a9-221ca1a6dc6e" (UID: "8899f354-3f43-4222-88a9-221ca1a6dc6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.722301 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8899f354-3f43-4222-88a9-221ca1a6dc6e-kube-api-access-tgzf5" (OuterVolumeSpecName: "kube-api-access-tgzf5") pod "8899f354-3f43-4222-88a9-221ca1a6dc6e" (UID: "8899f354-3f43-4222-88a9-221ca1a6dc6e"). InnerVolumeSpecName "kube-api-access-tgzf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.766481 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c01bf13b-8ada-46de-a969-cb5691c8d1c0" (UID: "c01bf13b-8ada-46de-a969-cb5691c8d1c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.820508 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgzf5\" (UniqueName: \"kubernetes.io/projected/8899f354-3f43-4222-88a9-221ca1a6dc6e-kube-api-access-tgzf5\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.820534 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.820572 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01bf13b-8ada-46de-a969-cb5691c8d1c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.820581 4959 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.829518 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8899f354-3f43-4222-88a9-221ca1a6dc6e" (UID: "8899f354-3f43-4222-88a9-221ca1a6dc6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:23 crc kubenswrapper[4959]: I0121 13:15:23.922030 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8899f354-3f43-4222-88a9-221ca1a6dc6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.422990 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x66q8" event={"ID":"c01bf13b-8ada-46de-a969-cb5691c8d1c0","Type":"ContainerDied","Data":"6c6b80fe0639faca2a7e90675cce7055edbcc1539397338db1c91e7e773a2e62"} Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.423021 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x66q8" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.423049 4959 scope.go:117] "RemoveContainer" containerID="41cbf7932040d1e18c3d7f8e19c134f8e8b913f4dc4f25472721988c6e3574a8" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.428676 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.428680 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr76d" event={"ID":"f29bfd9b-784d-46af-a90a-47ab4f2c5dfb","Type":"ContainerDied","Data":"228b8dc7269f67c1f5c915db6ccc26e6661b29addfc408e6aca3b907ddbbe3f5"} Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.430726 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-72ps5" event={"ID":"8899f354-3f43-4222-88a9-221ca1a6dc6e","Type":"ContainerDied","Data":"03c305726eaa9dc01299479238e6f863d82a0905e047d359300d9cbed67a0078"} Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.430884 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-72ps5" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.439953 4959 scope.go:117] "RemoveContainer" containerID="c47e3b0b59d0c55ce70154c8e0c1a2ac605cfce474328cbb9ab8f972713d4985" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.461453 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x66q8"] Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.465339 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x66q8"] Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.488522 4959 scope.go:117] "RemoveContainer" containerID="e769f1e9d6b48aebc387d23028d586e79c7577fcbc4b2cd058fccd7f5ff93951" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.489173 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-72ps5"] Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.502495 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-72ps5"] Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.507555 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr76d"] Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.509844 4959 scope.go:117] "RemoveContainer" containerID="47f2e3469b1551818252a1da3375c43b6962067f07526c873a436b54add87765" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.511118 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr76d"] Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.528480 4959 scope.go:117] "RemoveContainer" containerID="bab8015d20949fd0a6974720bddd0420a5e9940fe332a3a4808d78d7f2ab98d7" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.540932 4959 scope.go:117] "RemoveContainer" containerID="b58974dbaad639f2ef93db5b042cbd0b949314af49d68961de8d5f5d39841099" Jan 21 13:15:24 crc kubenswrapper[4959]: I0121 13:15:24.557494 4959 scope.go:117] "RemoveContainer" containerID="8b00b45fe596cb20e8df32d2bf64e2c3d6780af24ee41e9cde638896d4713a19" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221623 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2gp7k"] Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221822 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221832 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221846 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221852 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221861 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221867 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" 
containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221880 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="extract-content" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221886 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="extract-content" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221893 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="extract-utilities" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221899 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="extract-utilities" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221907 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221912 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221919 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerName="extract-utilities" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221926 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerName="extract-utilities" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221935 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerName="extract-content" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221941 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerName="extract-content" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221949 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerName="extract-utilities" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221955 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerName="extract-utilities" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221965 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="extract-utilities" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221971 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="extract-utilities" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221979 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221985 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.221992 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerName="extract-content" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.221997 4959 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerName="extract-content" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.222005 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="extract-content" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222010 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="extract-content" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222090 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222119 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222128 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222134 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222143 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222152 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" containerName="registry-server" Jan 21 13:15:25 crc kubenswrapper[4959]: E0121 13:15:25.222235 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222241 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" containerName="marketplace-operator" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.222860 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.227687 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.238792 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2gp7k"] Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.292606 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0220f7cc-761e-4995-aa56-6c543cd5a294" path="/var/lib/kubelet/pods/0220f7cc-761e-4995-aa56-6c543cd5a294/volumes" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.293285 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8899f354-3f43-4222-88a9-221ca1a6dc6e" path="/var/lib/kubelet/pods/8899f354-3f43-4222-88a9-221ca1a6dc6e/volumes" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.293830 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c01bf13b-8ada-46de-a969-cb5691c8d1c0" path="/var/lib/kubelet/pods/c01bf13b-8ada-46de-a969-cb5691c8d1c0/volumes" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.294839 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3b719b-be6f-4a11-a13c-ba1bfca953a7" path="/var/lib/kubelet/pods/ea3b719b-be6f-4a11-a13c-ba1bfca953a7/volumes" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.295473 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29bfd9b-784d-46af-a90a-47ab4f2c5dfb" path="/var/lib/kubelet/pods/f29bfd9b-784d-46af-a90a-47ab4f2c5dfb/volumes" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.338541 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-utilities\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.338607 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-catalog-content\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.338672 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cks5f\" (UniqueName: \"kubernetes.io/projected/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-kube-api-access-cks5f\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.421428 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtg48"] Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.422771 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.424947 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.431522 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtg48"] Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.439726 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-utilities\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.439779 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-catalog-content\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.439813 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cks5f\" (UniqueName: \"kubernetes.io/projected/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-kube-api-access-cks5f\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.440592 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-utilities\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.440676 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-catalog-content\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.466327 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cks5f\" (UniqueName: \"kubernetes.io/projected/a8d96f57-3e2b-4959-9205-7ccb1f90abf2-kube-api-access-cks5f\") pod \"certified-operators-2gp7k\" (UID: \"a8d96f57-3e2b-4959-9205-7ccb1f90abf2\") " pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.541008 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726bd9b3-bca1-4956-9252-8c52bf6860b4-catalog-content\") pod \"community-operators-gtg48\" (UID: \"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.541487 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726bd9b3-bca1-4956-9252-8c52bf6860b4-utilities\") pod \"community-operators-gtg48\" (UID: 
\"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.541594 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg5ts\" (UniqueName: \"kubernetes.io/projected/726bd9b3-bca1-4956-9252-8c52bf6860b4-kube-api-access-kg5ts\") pod \"community-operators-gtg48\" (UID: \"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.552292 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.643165 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726bd9b3-bca1-4956-9252-8c52bf6860b4-utilities\") pod \"community-operators-gtg48\" (UID: \"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.643764 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg5ts\" (UniqueName: \"kubernetes.io/projected/726bd9b3-bca1-4956-9252-8c52bf6860b4-kube-api-access-kg5ts\") pod \"community-operators-gtg48\" (UID: \"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.643840 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726bd9b3-bca1-4956-9252-8c52bf6860b4-catalog-content\") pod \"community-operators-gtg48\" (UID: \"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.644401 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726bd9b3-bca1-4956-9252-8c52bf6860b4-utilities\") pod \"community-operators-gtg48\" (UID: \"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.646625 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726bd9b3-bca1-4956-9252-8c52bf6860b4-catalog-content\") pod \"community-operators-gtg48\" (UID: \"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.664646 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg5ts\" (UniqueName: \"kubernetes.io/projected/726bd9b3-bca1-4956-9252-8c52bf6860b4-kube-api-access-kg5ts\") pod \"community-operators-gtg48\" (UID: \"726bd9b3-bca1-4956-9252-8c52bf6860b4\") " pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.740429 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:25 crc kubenswrapper[4959]: I0121 13:15:25.939231 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2gp7k"] Jan 21 13:15:25 crc kubenswrapper[4959]: W0121 13:15:25.945684 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d96f57_3e2b_4959_9205_7ccb1f90abf2.slice/crio-d2f20e100b86d96d7d087069b08a109027c4cc425b54c08be269877c94240f43 WatchSource:0}: Error finding container d2f20e100b86d96d7d087069b08a109027c4cc425b54c08be269877c94240f43: Status 404 returned error can't find the container with id d2f20e100b86d96d7d087069b08a109027c4cc425b54c08be269877c94240f43 Jan 21 13:15:26 crc kubenswrapper[4959]: I0121 13:15:26.122786 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtg48"] Jan 21 13:15:26 crc kubenswrapper[4959]: W0121 13:15:26.147039 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726bd9b3_bca1_4956_9252_8c52bf6860b4.slice/crio-5d2cb6d42a901d236c877cbbe1379d9ae8c5326e73a7e727c73cc47b6d6b11b8 WatchSource:0}: Error finding container 5d2cb6d42a901d236c877cbbe1379d9ae8c5326e73a7e727c73cc47b6d6b11b8: Status 404 returned error can't find the container with id 5d2cb6d42a901d236c877cbbe1379d9ae8c5326e73a7e727c73cc47b6d6b11b8 Jan 21 13:15:26 crc kubenswrapper[4959]: I0121 13:15:26.450440 4959 generic.go:334] "Generic (PLEG): container finished" podID="a8d96f57-3e2b-4959-9205-7ccb1f90abf2" containerID="1606cc6570463228d7d14ad5dbfceca18c81861f899f9dda61e3452e54d46dcf" exitCode=0 Jan 21 13:15:26 crc kubenswrapper[4959]: I0121 13:15:26.450512 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gp7k" event={"ID":"a8d96f57-3e2b-4959-9205-7ccb1f90abf2","Type":"ContainerDied","Data":"1606cc6570463228d7d14ad5dbfceca18c81861f899f9dda61e3452e54d46dcf"} Jan 21 13:15:26 crc kubenswrapper[4959]: I0121 13:15:26.450548 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gp7k" event={"ID":"a8d96f57-3e2b-4959-9205-7ccb1f90abf2","Type":"ContainerStarted","Data":"d2f20e100b86d96d7d087069b08a109027c4cc425b54c08be269877c94240f43"} Jan 21 13:15:26 crc kubenswrapper[4959]: I0121 13:15:26.454512 4959 generic.go:334] "Generic (PLEG): container finished" podID="726bd9b3-bca1-4956-9252-8c52bf6860b4" containerID="b4dc650f86954c1e0a192b460ad22426236a0f7588ca31dd8068b0ea45234695" exitCode=0 Jan 21 13:15:26 crc kubenswrapper[4959]: I0121 13:15:26.454549 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtg48" event={"ID":"726bd9b3-bca1-4956-9252-8c52bf6860b4","Type":"ContainerDied","Data":"b4dc650f86954c1e0a192b460ad22426236a0f7588ca31dd8068b0ea45234695"} Jan 21 13:15:26 crc kubenswrapper[4959]: I0121 13:15:26.454580 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtg48" event={"ID":"726bd9b3-bca1-4956-9252-8c52bf6860b4","Type":"ContainerStarted","Data":"5d2cb6d42a901d236c877cbbe1379d9ae8c5326e73a7e727c73cc47b6d6b11b8"} Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.467370 4959 generic.go:334] "Generic (PLEG): container finished" podID="726bd9b3-bca1-4956-9252-8c52bf6860b4" containerID="be91b201ce468affb85bd3872548e3d029e935521954d37340a3f8f4edef96e1" 
exitCode=0 Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.467428 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtg48" event={"ID":"726bd9b3-bca1-4956-9252-8c52bf6860b4","Type":"ContainerDied","Data":"be91b201ce468affb85bd3872548e3d029e935521954d37340a3f8f4edef96e1"} Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.470302 4959 generic.go:334] "Generic (PLEG): container finished" podID="a8d96f57-3e2b-4959-9205-7ccb1f90abf2" containerID="2db4095748d79d8d5baf28ce499172ba2f047da06aeb2a8c6d8ece440c356597" exitCode=0 Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.470336 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gp7k" event={"ID":"a8d96f57-3e2b-4959-9205-7ccb1f90abf2","Type":"ContainerDied","Data":"2db4095748d79d8d5baf28ce499172ba2f047da06aeb2a8c6d8ece440c356597"} Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.619581 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k5x8p"] Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.620770 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.622467 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.630698 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5x8p"] Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.771925 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d13b31b-9111-4e07-83c4-c55c579cb41f-catalog-content\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.772295 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d13b31b-9111-4e07-83c4-c55c579cb41f-utilities\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.772413 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvkf\" (UniqueName: \"kubernetes.io/projected/3d13b31b-9111-4e07-83c4-c55c579cb41f-kube-api-access-nbvkf\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.824277 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xrqzp"] Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.825626 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.829303 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.832050 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xrqzp"] Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.873109 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d13b31b-9111-4e07-83c4-c55c579cb41f-catalog-content\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.873388 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d13b31b-9111-4e07-83c4-c55c579cb41f-utilities\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.873439 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvkf\" (UniqueName: \"kubernetes.io/projected/3d13b31b-9111-4e07-83c4-c55c579cb41f-kube-api-access-nbvkf\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.873544 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d13b31b-9111-4e07-83c4-c55c579cb41f-catalog-content\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.874641 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d13b31b-9111-4e07-83c4-c55c579cb41f-utilities\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.892398 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvkf\" (UniqueName: \"kubernetes.io/projected/3d13b31b-9111-4e07-83c4-c55c579cb41f-kube-api-access-nbvkf\") pod \"redhat-marketplace-k5x8p\" (UID: \"3d13b31b-9111-4e07-83c4-c55c579cb41f\") " pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.939210 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.974845 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc62759-ca0b-47bf-8839-c23821a9124e-catalog-content\") pod \"redhat-operators-xrqzp\" (UID: \"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.974937 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2mq9\" (UniqueName: \"kubernetes.io/projected/2bc62759-ca0b-47bf-8839-c23821a9124e-kube-api-access-w2mq9\") pod \"redhat-operators-xrqzp\" (UID: \"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:27 crc kubenswrapper[4959]: I0121 13:15:27.975272 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc62759-ca0b-47bf-8839-c23821a9124e-utilities\") pod \"redhat-operators-xrqzp\" (UID: \"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.077206 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2mq9\" (UniqueName: \"kubernetes.io/projected/2bc62759-ca0b-47bf-8839-c23821a9124e-kube-api-access-w2mq9\") pod \"redhat-operators-xrqzp\" (UID: \"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.077596 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc62759-ca0b-47bf-8839-c23821a9124e-utilities\") pod \"redhat-operators-xrqzp\" (UID: \"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.077639 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc62759-ca0b-47bf-8839-c23821a9124e-catalog-content\") pod \"redhat-operators-xrqzp\" (UID: \"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.078237 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc62759-ca0b-47bf-8839-c23821a9124e-catalog-content\") pod \"redhat-operators-xrqzp\" (UID: \"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.078268 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc62759-ca0b-47bf-8839-c23821a9124e-utilities\") pod \"redhat-operators-xrqzp\" (UID: \"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.094862 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2mq9\" (UniqueName: \"kubernetes.io/projected/2bc62759-ca0b-47bf-8839-c23821a9124e-kube-api-access-w2mq9\") pod \"redhat-operators-xrqzp\" (UID: 
\"2bc62759-ca0b-47bf-8839-c23821a9124e\") " pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.146340 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.337473 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5x8p"] Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.476756 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtg48" event={"ID":"726bd9b3-bca1-4956-9252-8c52bf6860b4","Type":"ContainerStarted","Data":"a747767971184909dccd7532d7b17abcda9af6cd2270fcfef4edced7eeae8ede"} Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.485033 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gp7k" event={"ID":"a8d96f57-3e2b-4959-9205-7ccb1f90abf2","Type":"ContainerStarted","Data":"592f97adb0c9275baa5687754a500791136d2a483e4db6ab7def894caf8fdf5f"} Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.491476 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5x8p" event={"ID":"3d13b31b-9111-4e07-83c4-c55c579cb41f","Type":"ContainerStarted","Data":"0fe45c17710db85ac3a90588d0dd108dd29c186396ec7db99c9ca1768492b04b"} Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.491525 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5x8p" event={"ID":"3d13b31b-9111-4e07-83c4-c55c579cb41f","Type":"ContainerStarted","Data":"ecd3ac1cc72eb53d62c87150c304189332a54ebd306553419be4d8a2ec0dd73e"} Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.494268 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtg48" podStartSLOduration=2.064005833 podStartE2EDuration="3.494254975s" podCreationTimestamp="2026-01-21 13:15:25 +0000 UTC" firstStartedPulling="2026-01-21 13:15:26.456248459 +0000 UTC m=+387.419279002" lastFinishedPulling="2026-01-21 13:15:27.886497601 +0000 UTC m=+388.849528144" observedRunningTime="2026-01-21 13:15:28.4936947 +0000 UTC m=+389.456725243" watchObservedRunningTime="2026-01-21 13:15:28.494254975 +0000 UTC m=+389.457285528" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.530713 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2gp7k" podStartSLOduration=2.1215459230000002 podStartE2EDuration="3.530697691s" podCreationTimestamp="2026-01-21 13:15:25 +0000 UTC" firstStartedPulling="2026-01-21 13:15:26.451894015 +0000 UTC m=+387.414924558" lastFinishedPulling="2026-01-21 13:15:27.861045793 +0000 UTC m=+388.824076326" observedRunningTime="2026-01-21 13:15:28.526131171 +0000 UTC m=+389.489161734" watchObservedRunningTime="2026-01-21 13:15:28.530697691 +0000 UTC m=+389.493728234" Jan 21 13:15:28 crc kubenswrapper[4959]: I0121 13:15:28.572390 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xrqzp"] Jan 21 13:15:28 crc kubenswrapper[4959]: W0121 13:15:28.579235 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bc62759_ca0b_47bf_8839_c23821a9124e.slice/crio-43c2d4cde11b491ced612aca2c781cc5470e6e1db3e6b597ac1619434cf5ccf3 WatchSource:0}: Error finding container 
43c2d4cde11b491ced612aca2c781cc5470e6e1db3e6b597ac1619434cf5ccf3: Status 404 returned error can't find the container with id 43c2d4cde11b491ced612aca2c781cc5470e6e1db3e6b597ac1619434cf5ccf3 Jan 21 13:15:29 crc kubenswrapper[4959]: I0121 13:15:29.497459 4959 generic.go:334] "Generic (PLEG): container finished" podID="3d13b31b-9111-4e07-83c4-c55c579cb41f" containerID="0fe45c17710db85ac3a90588d0dd108dd29c186396ec7db99c9ca1768492b04b" exitCode=0 Jan 21 13:15:29 crc kubenswrapper[4959]: I0121 13:15:29.497596 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5x8p" event={"ID":"3d13b31b-9111-4e07-83c4-c55c579cb41f","Type":"ContainerDied","Data":"0fe45c17710db85ac3a90588d0dd108dd29c186396ec7db99c9ca1768492b04b"} Jan 21 13:15:29 crc kubenswrapper[4959]: I0121 13:15:29.498998 4959 generic.go:334] "Generic (PLEG): container finished" podID="2bc62759-ca0b-47bf-8839-c23821a9124e" containerID="2a1bfc3c0492d52bba155b44e478ef7de96fb07952e43b23d88106e3ba485461" exitCode=0 Jan 21 13:15:29 crc kubenswrapper[4959]: I0121 13:15:29.499045 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrqzp" event={"ID":"2bc62759-ca0b-47bf-8839-c23821a9124e","Type":"ContainerDied","Data":"2a1bfc3c0492d52bba155b44e478ef7de96fb07952e43b23d88106e3ba485461"} Jan 21 13:15:29 crc kubenswrapper[4959]: I0121 13:15:29.499108 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrqzp" event={"ID":"2bc62759-ca0b-47bf-8839-c23821a9124e","Type":"ContainerStarted","Data":"43c2d4cde11b491ced612aca2c781cc5470e6e1db3e6b597ac1619434cf5ccf3"} Jan 21 13:15:30 crc kubenswrapper[4959]: I0121 13:15:30.509476 4959 generic.go:334] "Generic (PLEG): container finished" podID="3d13b31b-9111-4e07-83c4-c55c579cb41f" containerID="5e50e6a9813ac4797cb1154b58beded3840fe093f774ba5fc2979fd15ec6e5e5" exitCode=0 Jan 21 13:15:30 crc kubenswrapper[4959]: I0121 13:15:30.509762 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5x8p" event={"ID":"3d13b31b-9111-4e07-83c4-c55c579cb41f","Type":"ContainerDied","Data":"5e50e6a9813ac4797cb1154b58beded3840fe093f774ba5fc2979fd15ec6e5e5"} Jan 21 13:15:31 crc kubenswrapper[4959]: I0121 13:15:31.517463 4959 generic.go:334] "Generic (PLEG): container finished" podID="2bc62759-ca0b-47bf-8839-c23821a9124e" containerID="56e579e6c2c3583515979d342eb8057d514262e303c1221915fb886b73878118" exitCode=0 Jan 21 13:15:31 crc kubenswrapper[4959]: I0121 13:15:31.517555 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrqzp" event={"ID":"2bc62759-ca0b-47bf-8839-c23821a9124e","Type":"ContainerDied","Data":"56e579e6c2c3583515979d342eb8057d514262e303c1221915fb886b73878118"} Jan 21 13:15:34 crc kubenswrapper[4959]: I0121 13:15:34.533785 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5x8p" event={"ID":"3d13b31b-9111-4e07-83c4-c55c579cb41f","Type":"ContainerStarted","Data":"3622d2db2c4fbbaf681286d8074fa9a986c4ba86f26ba0fc23258564f96395cb"} Jan 21 13:15:34 crc kubenswrapper[4959]: I0121 13:15:34.536076 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xrqzp" event={"ID":"2bc62759-ca0b-47bf-8839-c23821a9124e","Type":"ContainerStarted","Data":"fb6d11837bd4fe0c7bf4d9f22ca1733709c0f47c720fb42a2b391214650cf6f4"} Jan 21 13:15:34 crc kubenswrapper[4959]: I0121 13:15:34.576654 4959 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k5x8p" podStartSLOduration=3.878514668 podStartE2EDuration="7.576634355s" podCreationTimestamp="2026-01-21 13:15:27 +0000 UTC" firstStartedPulling="2026-01-21 13:15:28.493150216 +0000 UTC m=+389.456180759" lastFinishedPulling="2026-01-21 13:15:32.191269903 +0000 UTC m=+393.154300446" observedRunningTime="2026-01-21 13:15:34.55244915 +0000 UTC m=+395.515479693" watchObservedRunningTime="2026-01-21 13:15:34.576634355 +0000 UTC m=+395.539664898" Jan 21 13:15:34 crc kubenswrapper[4959]: I0121 13:15:34.577916 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xrqzp" podStartSLOduration=4.525077859 podStartE2EDuration="7.577906569s" podCreationTimestamp="2026-01-21 13:15:27 +0000 UTC" firstStartedPulling="2026-01-21 13:15:29.501485268 +0000 UTC m=+390.464515811" lastFinishedPulling="2026-01-21 13:15:32.554313978 +0000 UTC m=+393.517344521" observedRunningTime="2026-01-21 13:15:34.573755488 +0000 UTC m=+395.536786051" watchObservedRunningTime="2026-01-21 13:15:34.577906569 +0000 UTC m=+395.540937112" Jan 21 13:15:35 crc kubenswrapper[4959]: I0121 13:15:35.552555 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:35 crc kubenswrapper[4959]: I0121 13:15:35.552933 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:35 crc kubenswrapper[4959]: I0121 13:15:35.598501 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:35 crc kubenswrapper[4959]: I0121 13:15:35.741945 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:35 crc kubenswrapper[4959]: I0121 13:15:35.742044 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:35 crc kubenswrapper[4959]: I0121 13:15:35.779465 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:36 crc kubenswrapper[4959]: I0121 13:15:36.584011 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtg48" Jan 21 13:15:36 crc kubenswrapper[4959]: I0121 13:15:36.588020 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2gp7k" Jan 21 13:15:37 crc kubenswrapper[4959]: I0121 13:15:37.939946 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:37 crc kubenswrapper[4959]: I0121 13:15:37.940321 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:37 crc kubenswrapper[4959]: I0121 13:15:37.986187 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:38 crc kubenswrapper[4959]: I0121 13:15:38.146480 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:38 crc kubenswrapper[4959]: I0121 13:15:38.146561 4959 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:38 crc kubenswrapper[4959]: I0121 13:15:38.198805 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" podUID="4d74ebb8-a165-44d5-a5cf-17217e03be90" containerName="registry" containerID="cri-o://9f0ffc6e8564a2c38afb4b5ca60ba7055ad367337a5b758b80c48a5f4dfebd74" gracePeriod=30 Jan 21 13:15:39 crc kubenswrapper[4959]: I0121 13:15:39.201865 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xrqzp" podUID="2bc62759-ca0b-47bf-8839-c23821a9124e" containerName="registry-server" probeResult="failure" output=< Jan 21 13:15:39 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 13:15:39 crc kubenswrapper[4959]: > Jan 21 13:15:40 crc kubenswrapper[4959]: I0121 13:15:40.569150 4959 generic.go:334] "Generic (PLEG): container finished" podID="4d74ebb8-a165-44d5-a5cf-17217e03be90" containerID="9f0ffc6e8564a2c38afb4b5ca60ba7055ad367337a5b758b80c48a5f4dfebd74" exitCode=0 Jan 21 13:15:40 crc kubenswrapper[4959]: I0121 13:15:40.569232 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" event={"ID":"4d74ebb8-a165-44d5-a5cf-17217e03be90","Type":"ContainerDied","Data":"9f0ffc6e8564a2c38afb4b5ca60ba7055ad367337a5b758b80c48a5f4dfebd74"} Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.906808 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.953845 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-certificates\") pod \"4d74ebb8-a165-44d5-a5cf-17217e03be90\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.953916 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jgqd\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-kube-api-access-5jgqd\") pod \"4d74ebb8-a165-44d5-a5cf-17217e03be90\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.953968 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d74ebb8-a165-44d5-a5cf-17217e03be90-installation-pull-secrets\") pod \"4d74ebb8-a165-44d5-a5cf-17217e03be90\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.953986 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d74ebb8-a165-44d5-a5cf-17217e03be90-ca-trust-extracted\") pod \"4d74ebb8-a165-44d5-a5cf-17217e03be90\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.954032 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-trusted-ca\") pod \"4d74ebb8-a165-44d5-a5cf-17217e03be90\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 
13:15:40.954168 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4d74ebb8-a165-44d5-a5cf-17217e03be90\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.954198 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-bound-sa-token\") pod \"4d74ebb8-a165-44d5-a5cf-17217e03be90\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.954228 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-tls\") pod \"4d74ebb8-a165-44d5-a5cf-17217e03be90\" (UID: \"4d74ebb8-a165-44d5-a5cf-17217e03be90\") " Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.954935 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4d74ebb8-a165-44d5-a5cf-17217e03be90" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.955802 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4d74ebb8-a165-44d5-a5cf-17217e03be90" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.966454 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4d74ebb8-a165-44d5-a5cf-17217e03be90" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.967805 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-kube-api-access-5jgqd" (OuterVolumeSpecName: "kube-api-access-5jgqd") pod "4d74ebb8-a165-44d5-a5cf-17217e03be90" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90"). InnerVolumeSpecName "kube-api-access-5jgqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.968262 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d74ebb8-a165-44d5-a5cf-17217e03be90-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4d74ebb8-a165-44d5-a5cf-17217e03be90" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.970556 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4d74ebb8-a165-44d5-a5cf-17217e03be90" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.973501 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d74ebb8-a165-44d5-a5cf-17217e03be90-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4d74ebb8-a165-44d5-a5cf-17217e03be90" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:40.978503 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4d74ebb8-a165-44d5-a5cf-17217e03be90" (UID: "4d74ebb8-a165-44d5-a5cf-17217e03be90"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.056315 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.056371 4959 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.056386 4959 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.056398 4959 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d74ebb8-a165-44d5-a5cf-17217e03be90-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.056414 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jgqd\" (UniqueName: \"kubernetes.io/projected/4d74ebb8-a165-44d5-a5cf-17217e03be90-kube-api-access-5jgqd\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.056426 4959 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d74ebb8-a165-44d5-a5cf-17217e03be90-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.056440 4959 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d74ebb8-a165-44d5-a5cf-17217e03be90-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.576977 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" 
event={"ID":"4d74ebb8-a165-44d5-a5cf-17217e03be90","Type":"ContainerDied","Data":"0623eea71e7e19b6ecacad0541366b68c4cd41f180244e47b45fd84625d48515"} Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.577021 4959 scope.go:117] "RemoveContainer" containerID="9f0ffc6e8564a2c38afb4b5ca60ba7055ad367337a5b758b80c48a5f4dfebd74" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.577155 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xlvm4" Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.600183 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xlvm4"] Jan 21 13:15:42 crc kubenswrapper[4959]: I0121 13:15:41.606684 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xlvm4"] Jan 21 13:15:43 crc kubenswrapper[4959]: I0121 13:15:43.298074 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d74ebb8-a165-44d5-a5cf-17217e03be90" path="/var/lib/kubelet/pods/4d74ebb8-a165-44d5-a5cf-17217e03be90/volumes" Jan 21 13:15:47 crc kubenswrapper[4959]: I0121 13:15:47.977894 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k5x8p" Jan 21 13:15:48 crc kubenswrapper[4959]: I0121 13:15:48.182552 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:15:48 crc kubenswrapper[4959]: I0121 13:15:48.235870 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xrqzp" Jan 21 13:17:51 crc kubenswrapper[4959]: I0121 13:17:51.380535 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:17:51 crc kubenswrapper[4959]: I0121 13:17:51.381536 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:18:21 crc kubenswrapper[4959]: I0121 13:18:21.379776 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:18:21 crc kubenswrapper[4959]: I0121 13:18:21.380556 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:18:51 crc kubenswrapper[4959]: I0121 13:18:51.380149 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 21 13:18:51 crc kubenswrapper[4959]: I0121 13:18:51.380729 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:18:51 crc kubenswrapper[4959]: I0121 13:18:51.380806 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:18:51 crc kubenswrapper[4959]: I0121 13:18:51.381600 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aad89835c8a01e37a654d9249967f8eba913bfa1b726fe57032b28a559caff14"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:18:51 crc kubenswrapper[4959]: I0121 13:18:51.381695 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://aad89835c8a01e37a654d9249967f8eba913bfa1b726fe57032b28a559caff14" gracePeriod=600 Jan 21 13:18:51 crc kubenswrapper[4959]: I0121 13:18:51.614982 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="aad89835c8a01e37a654d9249967f8eba913bfa1b726fe57032b28a559caff14" exitCode=0 Jan 21 13:18:51 crc kubenswrapper[4959]: I0121 13:18:51.615062 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"aad89835c8a01e37a654d9249967f8eba913bfa1b726fe57032b28a559caff14"} Jan 21 13:18:51 crc kubenswrapper[4959]: I0121 13:18:51.615537 4959 scope.go:117] "RemoveContainer" containerID="13642c440229a641b98715e5b86f12964d19facd8a63f7a2ba469a1067d57fdf" Jan 21 13:18:52 crc kubenswrapper[4959]: I0121 13:18:52.623528 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"b802840efc0a2a43f88d6b69a868dc35f4fb5bac7bce20e288e99506f21a88de"} Jan 21 13:20:51 crc kubenswrapper[4959]: I0121 13:20:51.380072 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:20:51 crc kubenswrapper[4959]: I0121 13:20:51.380764 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.097693 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm"] Jan 21 13:21:01 crc kubenswrapper[4959]: E0121 13:21:01.098486 4959 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d74ebb8-a165-44d5-a5cf-17217e03be90" containerName="registry" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.098500 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d74ebb8-a165-44d5-a5cf-17217e03be90" containerName="registry" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.098602 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d74ebb8-a165-44d5-a5cf-17217e03be90" containerName="registry" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.098992 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.106873 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.106960 4959 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jw6h5" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.107078 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.119382 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7jsww"] Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.120155 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7jsww" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.123241 4959 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-84r8f" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.139754 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm"] Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.148590 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4lb8q"] Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.152464 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.154621 4959 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zgfjr" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.155898 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7jsww"] Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.171997 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4lb8q"] Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.248073 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2569p\" (UniqueName: \"kubernetes.io/projected/ab2d47a6-0c67-4286-bf53-c32a798cccb6-kube-api-access-2569p\") pod \"cert-manager-cainjector-cf98fcc89-8mqxm\" (UID: \"ab2d47a6-0c67-4286-bf53-c32a798cccb6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.248243 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wntxf\" (UniqueName: \"kubernetes.io/projected/c485a4a8-e2c1-4f29-aec5-0712e70756da-kube-api-access-wntxf\") pod \"cert-manager-858654f9db-7jsww\" (UID: \"c485a4a8-e2c1-4f29-aec5-0712e70756da\") " pod="cert-manager/cert-manager-858654f9db-7jsww" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.248357 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p2w6\" (UniqueName: \"kubernetes.io/projected/0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40-kube-api-access-8p2w6\") pod \"cert-manager-webhook-687f57d79b-4lb8q\" (UID: \"0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.349646 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2569p\" (UniqueName: \"kubernetes.io/projected/ab2d47a6-0c67-4286-bf53-c32a798cccb6-kube-api-access-2569p\") pod \"cert-manager-cainjector-cf98fcc89-8mqxm\" (UID: \"ab2d47a6-0c67-4286-bf53-c32a798cccb6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.349710 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wntxf\" (UniqueName: \"kubernetes.io/projected/c485a4a8-e2c1-4f29-aec5-0712e70756da-kube-api-access-wntxf\") pod \"cert-manager-858654f9db-7jsww\" (UID: \"c485a4a8-e2c1-4f29-aec5-0712e70756da\") " pod="cert-manager/cert-manager-858654f9db-7jsww" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.349762 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2w6\" (UniqueName: \"kubernetes.io/projected/0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40-kube-api-access-8p2w6\") pod \"cert-manager-webhook-687f57d79b-4lb8q\" (UID: \"0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.368248 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2569p\" (UniqueName: \"kubernetes.io/projected/ab2d47a6-0c67-4286-bf53-c32a798cccb6-kube-api-access-2569p\") pod \"cert-manager-cainjector-cf98fcc89-8mqxm\" (UID: \"ab2d47a6-0c67-4286-bf53-c32a798cccb6\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.368287 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wntxf\" (UniqueName: \"kubernetes.io/projected/c485a4a8-e2c1-4f29-aec5-0712e70756da-kube-api-access-wntxf\") pod \"cert-manager-858654f9db-7jsww\" (UID: \"c485a4a8-e2c1-4f29-aec5-0712e70756da\") " pod="cert-manager/cert-manager-858654f9db-7jsww" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.373972 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2w6\" (UniqueName: \"kubernetes.io/projected/0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40-kube-api-access-8p2w6\") pod \"cert-manager-webhook-687f57d79b-4lb8q\" (UID: \"0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.418489 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.443760 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7jsww" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.477058 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.629398 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm"] Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.639735 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.670370 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7jsww"] Jan 21 13:21:01 crc kubenswrapper[4959]: W0121 13:21:01.674542 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc485a4a8_e2c1_4f29_aec5_0712e70756da.slice/crio-fe4c3f8052d8f83659a95c648dfd890b30e367eeed0e6c0298312b6aa3c66d14 WatchSource:0}: Error finding container fe4c3f8052d8f83659a95c648dfd890b30e367eeed0e6c0298312b6aa3c66d14: Status 404 returned error can't find the container with id fe4c3f8052d8f83659a95c648dfd890b30e367eeed0e6c0298312b6aa3c66d14 Jan 21 13:21:01 crc kubenswrapper[4959]: I0121 13:21:01.733010 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4lb8q"] Jan 21 13:21:02 crc kubenswrapper[4959]: I0121 13:21:02.332584 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" event={"ID":"0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40","Type":"ContainerStarted","Data":"63744fb50bd4b60480d8daea30f23773cd8356e3a34d1531f7ba660f7b8cbecf"} Jan 21 13:21:02 crc kubenswrapper[4959]: I0121 13:21:02.333523 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7jsww" event={"ID":"c485a4a8-e2c1-4f29-aec5-0712e70756da","Type":"ContainerStarted","Data":"fe4c3f8052d8f83659a95c648dfd890b30e367eeed0e6c0298312b6aa3c66d14"} Jan 21 13:21:02 crc kubenswrapper[4959]: I0121 13:21:02.334517 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm" 
event={"ID":"ab2d47a6-0c67-4286-bf53-c32a798cccb6","Type":"ContainerStarted","Data":"4120b6fc96231315a45880580de21f57e4b1df2ac5eb2597d89e945bfa480743"} Jan 21 13:21:07 crc kubenswrapper[4959]: I0121 13:21:07.363327 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7jsww" event={"ID":"c485a4a8-e2c1-4f29-aec5-0712e70756da","Type":"ContainerStarted","Data":"1d09c3d7103bac15b878e1019c2e4061e7e9b7da096ba2d58f6a7af38f21ca4b"} Jan 21 13:21:07 crc kubenswrapper[4959]: I0121 13:21:07.365873 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm" event={"ID":"ab2d47a6-0c67-4286-bf53-c32a798cccb6","Type":"ContainerStarted","Data":"54b362a51c91dee96e779806c4a585b246ccedc47af153d1bfeb92df0ccc6859"} Jan 21 13:21:07 crc kubenswrapper[4959]: I0121 13:21:07.367284 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" event={"ID":"0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40","Type":"ContainerStarted","Data":"229bb5cf3116fbc2faf0e6bc438d79160baa1649197223069b8cdca1d97c2452"} Jan 21 13:21:07 crc kubenswrapper[4959]: I0121 13:21:07.367691 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" Jan 21 13:21:07 crc kubenswrapper[4959]: I0121 13:21:07.380355 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7jsww" podStartSLOduration=2.5102557599999997 podStartE2EDuration="6.3803366s" podCreationTimestamp="2026-01-21 13:21:01 +0000 UTC" firstStartedPulling="2026-01-21 13:21:01.676842576 +0000 UTC m=+722.639873119" lastFinishedPulling="2026-01-21 13:21:05.546923416 +0000 UTC m=+726.509953959" observedRunningTime="2026-01-21 13:21:07.379566039 +0000 UTC m=+728.342596582" watchObservedRunningTime="2026-01-21 13:21:07.3803366 +0000 UTC m=+728.343367143" Jan 21 13:21:07 crc kubenswrapper[4959]: I0121 13:21:07.405955 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" podStartSLOduration=1.820613409 podStartE2EDuration="6.40593509s" podCreationTimestamp="2026-01-21 13:21:01 +0000 UTC" firstStartedPulling="2026-01-21 13:21:01.739165902 +0000 UTC m=+722.702196445" lastFinishedPulling="2026-01-21 13:21:06.324487583 +0000 UTC m=+727.287518126" observedRunningTime="2026-01-21 13:21:07.401062327 +0000 UTC m=+728.364092870" watchObservedRunningTime="2026-01-21 13:21:07.40593509 +0000 UTC m=+728.368965633" Jan 21 13:21:07 crc kubenswrapper[4959]: I0121 13:21:07.436866 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8mqxm" podStartSLOduration=2.529461837 podStartE2EDuration="6.436850617s" podCreationTimestamp="2026-01-21 13:21:01 +0000 UTC" firstStartedPulling="2026-01-21 13:21:01.639533376 +0000 UTC m=+722.602563919" lastFinishedPulling="2026-01-21 13:21:05.546922156 +0000 UTC m=+726.509952699" observedRunningTime="2026-01-21 13:21:07.432789896 +0000 UTC m=+728.395820449" watchObservedRunningTime="2026-01-21 13:21:07.436850617 +0000 UTC m=+728.399881160" Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.438512 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x7k8s"] Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.439156 4959 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovn-controller" containerID="cri-o://e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033" gracePeriod=30 Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.439554 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="northd" containerID="cri-o://cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629" gracePeriod=30 Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.439583 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="sbdb" containerID="cri-o://b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44" gracePeriod=30 Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.439555 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="nbdb" containerID="cri-o://411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c" gracePeriod=30 Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.439604 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kube-rbac-proxy-node" containerID="cri-o://fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3" gracePeriod=30 Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.439820 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816" gracePeriod=30 Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.439604 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovn-acl-logging" containerID="cri-o://84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd" gracePeriod=30 Jan 21 13:21:09 crc kubenswrapper[4959]: I0121 13:21:09.514314 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" containerID="cri-o://3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" gracePeriod=30 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.178431 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/3.log" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.182036 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovn-acl-logging/0.log" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.182918 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovn-controller/0.log" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.183592 
4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.238565 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hp42r"] Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.238909 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovn-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.238933 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovn-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.238944 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovn-acl-logging" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.238953 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovn-acl-logging" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.238968 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="northd" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.238977 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="northd" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.238990 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="sbdb" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.238997 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="sbdb" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239009 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239017 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239028 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kubecfg-setup" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239035 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kubecfg-setup" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239044 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239050 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239062 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239072 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239084 4959 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239092 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239117 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239124 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239137 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="nbdb" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239145 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="nbdb" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239153 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239160 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.239172 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kube-rbac-proxy-node" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239178 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kube-rbac-proxy-node" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239326 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovn-acl-logging" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239337 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239345 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239358 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovn-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239369 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="northd" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239380 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="nbdb" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239388 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="sbdb" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239397 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: 
I0121 13:21:10.239405 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="kube-rbac-proxy-node" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239416 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239773 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.239787 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerName="ovnkube-controller" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.242031 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.301229 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvkhb\" (UniqueName: \"kubernetes.io/projected/eea635fd-8d4a-4b77-bb58-3d778f59c79e-kube-api-access-cvkhb\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.301654 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-slash\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.301768 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-ovn-kubernetes\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.301882 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-ovn\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.301808 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-slash" (OuterVolumeSpecName: "host-slash") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.301853 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.301991 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.301998 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-config\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302084 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-var-lib-openvswitch\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302127 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-netd\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302134 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302144 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-systemd\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302209 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovn-node-metrics-cert\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302232 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-env-overrides\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302252 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-script-lib\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302276 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-etc-openvswitch\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302292 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302328 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-netns\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302346 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-log-socket\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302365 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-kubelet\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302401 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-bin\") pod 
\"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302425 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-openvswitch\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302445 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-node-log\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302469 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-systemd-units\") pod \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\" (UID: \"eea635fd-8d4a-4b77-bb58-3d778f59c79e\") " Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302595 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302765 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302851 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.302974 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303048 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303282 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303348 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-log-socket" (OuterVolumeSpecName: "log-socket") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303380 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303400 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303420 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-node-log" (OuterVolumeSpecName: "node-log") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303488 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303562 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303680 4959 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303701 4959 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303711 4959 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303720 4959 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303729 4959 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303738 4959 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303748 4959 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303757 4959 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303765 4959 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303773 4959 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303780 4959 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303788 4959 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303796 4959 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-systemd-units\") on node 
\"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303804 4959 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303812 4959 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.303820 4959 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.304634 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.308262 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.308521 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea635fd-8d4a-4b77-bb58-3d778f59c79e-kube-api-access-cvkhb" (OuterVolumeSpecName: "kube-api-access-cvkhb") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "kube-api-access-cvkhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.318351 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "eea635fd-8d4a-4b77-bb58-3d778f59c79e" (UID: "eea635fd-8d4a-4b77-bb58-3d778f59c79e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.386591 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovnkube-controller/3.log" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.389109 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovn-acl-logging/0.log" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.389592 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x7k8s_eea635fd-8d4a-4b77-bb58-3d778f59c79e/ovn-controller/0.log" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.389977 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" exitCode=0 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390011 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44" exitCode=0 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390022 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c" exitCode=0 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390030 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629" exitCode=0 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390039 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816" exitCode=0 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390048 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3" exitCode=0 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390056 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd" exitCode=143 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390064 4959 generic.go:334] "Generic (PLEG): container finished" podID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" containerID="e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033" exitCode=143 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390059 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390111 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390135 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390155 4959 scope.go:117] "RemoveContainer" containerID="3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390159 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390182 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390196 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390212 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390226 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390245 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390254 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390262 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390269 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390277 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390283 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390290 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390298 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390308 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390321 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390331 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390338 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390348 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390357 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390365 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390373 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390380 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390387 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390394 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390404 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390417 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390425 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390432 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390439 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390447 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390454 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390462 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390468 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390475 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390481 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390491 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x7k8s" event={"ID":"eea635fd-8d4a-4b77-bb58-3d778f59c79e","Type":"ContainerDied","Data":"97a2140e81393fe7364cb079817a44c98b2380df395e4670f8fdbb68a8936bae"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390503 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390512 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 
13:21:10.390519 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390529 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390535 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390546 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390554 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390561 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390569 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.390576 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.392798 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/2.log" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.393397 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/1.log" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.393501 4959 generic.go:334] "Generic (PLEG): container finished" podID="867d68b2-3803-46b0-b974-62ec7ee89b49" containerID="a5cbffcf6d5315d0e71ca2cebcbf2ad03cf0607b739d2eb490068d3e9f9e5ed2" exitCode=2 Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.393571 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5zw9" event={"ID":"867d68b2-3803-46b0-b974-62ec7ee89b49","Type":"ContainerDied","Data":"a5cbffcf6d5315d0e71ca2cebcbf2ad03cf0607b739d2eb490068d3e9f9e5ed2"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.393618 4959 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031"} Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.394289 4959 scope.go:117] "RemoveContainer" containerID="a5cbffcf6d5315d0e71ca2cebcbf2ad03cf0607b739d2eb490068d3e9f9e5ed2" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.404703 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-ovn\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.405007 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-slash\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.405077 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-cni-bin\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.405430 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-run-ovn-kubernetes\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.405567 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovn-node-metrics-cert\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.405696 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-node-log\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.405799 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvk6\" (UniqueName: \"kubernetes.io/projected/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-kube-api-access-rjvk6\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.405870 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-env-overrides\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.405933 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-cni-netd\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.406001 4959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-var-lib-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.406073 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.406170 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-systemd\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.406246 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-run-netns\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.406308 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.406487 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-log-socket\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.406931 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovnkube-config\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.406979 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovnkube-script-lib\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.407042 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-systemd-units\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.407159 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-etc-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.407212 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-kubelet\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.407332 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvkhb\" (UniqueName: \"kubernetes.io/projected/eea635fd-8d4a-4b77-bb58-3d778f59c79e-kube-api-access-cvkhb\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.407358 4959 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.407371 4959 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eea635fd-8d4a-4b77-bb58-3d778f59c79e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.407387 4959 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eea635fd-8d4a-4b77-bb58-3d778f59c79e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.426157 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.442535 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x7k8s"] Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.449217 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x7k8s"] Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.449876 4959 scope.go:117] "RemoveContainer" containerID="b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.464324 4959 scope.go:117] "RemoveContainer" containerID="411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.486541 4959 scope.go:117] "RemoveContainer" containerID="cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.503922 4959 scope.go:117] "RemoveContainer" containerID="f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.507775 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-var-lib-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.507855 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.507872 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-var-lib-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.507900 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-systemd\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.507971 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508016 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508052 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.507972 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-systemd\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508072 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-run-netns\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508145 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-log-socket\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508149 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-run-netns\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508195 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovnkube-config\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508212 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-log-socket\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508235 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovnkube-script-lib\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508279 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-systemd-units\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508354 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-etc-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508415 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-systemd-units\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508452 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-kubelet\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508457 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-etc-openvswitch\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508422 4959 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-kubelet\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508513 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-ovn\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508556 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-run-ovn\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508575 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-slash\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508606 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-cni-bin\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508646 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-run-ovn-kubernetes\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508657 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-slash\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508682 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovn-node-metrics-cert\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508691 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-cni-bin\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508734 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-node-log\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508834 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjvk6\" (UniqueName: \"kubernetes.io/projected/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-kube-api-access-rjvk6\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508875 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-env-overrides\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.508906 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-cni-netd\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.509008 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-cni-netd\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.509017 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovnkube-config\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.509043 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-node-log\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.509250 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovnkube-script-lib\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.509302 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-host-run-ovn-kubernetes\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.509544 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-env-overrides\") pod \"ovnkube-node-hp42r\" (UID: 
\"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.512255 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-ovn-node-metrics-cert\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.521957 4959 scope.go:117] "RemoveContainer" containerID="fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.526332 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjvk6\" (UniqueName: \"kubernetes.io/projected/e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0-kube-api-access-rjvk6\") pod \"ovnkube-node-hp42r\" (UID: \"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0\") " pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.541368 4959 scope.go:117] "RemoveContainer" containerID="84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.566250 4959 scope.go:117] "RemoveContainer" containerID="e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.578421 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.609994 4959 scope.go:117] "RemoveContainer" containerID="22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6" Jan 21 13:21:10 crc kubenswrapper[4959]: W0121 13:21:10.639149 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d7573f_6bc7_4af9_8e83_8fe798cdd7e0.slice/crio-8fe749d9d71f779997b733ec6915fe6a7126982dcaaeb3f54166305e8b23357d WatchSource:0}: Error finding container 8fe749d9d71f779997b733ec6915fe6a7126982dcaaeb3f54166305e8b23357d: Status 404 returned error can't find the container with id 8fe749d9d71f779997b733ec6915fe6a7126982dcaaeb3f54166305e8b23357d Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.645617 4959 scope.go:117] "RemoveContainer" containerID="3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.647828 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": container with ID starting with 3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d not found: ID does not exist" containerID="3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.647862 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} err="failed to get container status \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": rpc error: code = NotFound desc = could not find container \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": container with ID starting with 3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d not found: ID does not exist" 
Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.647883 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.648553 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": container with ID starting with 271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8 not found: ID does not exist" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.648575 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} err="failed to get container status \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": rpc error: code = NotFound desc = could not find container \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": container with ID starting with 271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.648589 4959 scope.go:117] "RemoveContainer" containerID="b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.662298 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": container with ID starting with b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44 not found: ID does not exist" containerID="b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.662344 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} err="failed to get container status \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": rpc error: code = NotFound desc = could not find container \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": container with ID starting with b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.662376 4959 scope.go:117] "RemoveContainer" containerID="411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.667491 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": container with ID starting with 411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c not found: ID does not exist" containerID="411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.667532 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} err="failed to get container status \"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": rpc error: code = NotFound desc = could not find container 
\"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": container with ID starting with 411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.667555 4959 scope.go:117] "RemoveContainer" containerID="cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.667930 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": container with ID starting with cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629 not found: ID does not exist" containerID="cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.667961 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} err="failed to get container status \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": rpc error: code = NotFound desc = could not find container \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": container with ID starting with cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.667975 4959 scope.go:117] "RemoveContainer" containerID="f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.668859 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": container with ID starting with f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816 not found: ID does not exist" containerID="f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.668909 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} err="failed to get container status \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": rpc error: code = NotFound desc = could not find container \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": container with ID starting with f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.668940 4959 scope.go:117] "RemoveContainer" containerID="fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.669325 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": container with ID starting with fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3 not found: ID does not exist" containerID="fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.669406 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} 
err="failed to get container status \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": rpc error: code = NotFound desc = could not find container \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": container with ID starting with fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.669425 4959 scope.go:117] "RemoveContainer" containerID="84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.669671 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": container with ID starting with 84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd not found: ID does not exist" containerID="84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.669695 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} err="failed to get container status \"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": rpc error: code = NotFound desc = could not find container \"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": container with ID starting with 84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.669709 4959 scope.go:117] "RemoveContainer" containerID="e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.669953 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": container with ID starting with e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033 not found: ID does not exist" containerID="e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.670019 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} err="failed to get container status \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": rpc error: code = NotFound desc = could not find container \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": container with ID starting with e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.670033 4959 scope.go:117] "RemoveContainer" containerID="22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6" Jan 21 13:21:10 crc kubenswrapper[4959]: E0121 13:21:10.670453 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": container with ID starting with 22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6 not found: ID does not exist" containerID="22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.670477 4959 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} err="failed to get container status \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": rpc error: code = NotFound desc = could not find container \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": container with ID starting with 22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.670490 4959 scope.go:117] "RemoveContainer" containerID="3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.670702 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} err="failed to get container status \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": rpc error: code = NotFound desc = could not find container \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": container with ID starting with 3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.670730 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.670956 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} err="failed to get container status \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": rpc error: code = NotFound desc = could not find container \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": container with ID starting with 271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.670977 4959 scope.go:117] "RemoveContainer" containerID="b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.671246 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} err="failed to get container status \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": rpc error: code = NotFound desc = could not find container \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": container with ID starting with b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.671273 4959 scope.go:117] "RemoveContainer" containerID="411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.671582 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} err="failed to get container status \"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": rpc error: code = NotFound desc = could not find container \"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": container with ID starting with 
411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.671601 4959 scope.go:117] "RemoveContainer" containerID="cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.671796 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} err="failed to get container status \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": rpc error: code = NotFound desc = could not find container \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": container with ID starting with cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.671827 4959 scope.go:117] "RemoveContainer" containerID="f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.672112 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} err="failed to get container status \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": rpc error: code = NotFound desc = could not find container \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": container with ID starting with f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.672132 4959 scope.go:117] "RemoveContainer" containerID="fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.672460 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} err="failed to get container status \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": rpc error: code = NotFound desc = could not find container \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": container with ID starting with fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.672480 4959 scope.go:117] "RemoveContainer" containerID="84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.672686 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} err="failed to get container status \"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": rpc error: code = NotFound desc = could not find container \"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": container with ID starting with 84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.672713 4959 scope.go:117] "RemoveContainer" containerID="e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.673756 4959 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} err="failed to get container status \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": rpc error: code = NotFound desc = could not find container \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": container with ID starting with e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.673777 4959 scope.go:117] "RemoveContainer" containerID="22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.674039 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} err="failed to get container status \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": rpc error: code = NotFound desc = could not find container \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": container with ID starting with 22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.674059 4959 scope.go:117] "RemoveContainer" containerID="3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.674402 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} err="failed to get container status \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": rpc error: code = NotFound desc = could not find container \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": container with ID starting with 3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.674423 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.674867 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} err="failed to get container status \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": rpc error: code = NotFound desc = could not find container \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": container with ID starting with 271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.674885 4959 scope.go:117] "RemoveContainer" containerID="b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.675165 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} err="failed to get container status \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": rpc error: code = NotFound desc = could not find container \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": container with ID starting with b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44 not found: ID does not exist" Jan 
21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.675190 4959 scope.go:117] "RemoveContainer" containerID="411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.675430 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} err="failed to get container status \"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": rpc error: code = NotFound desc = could not find container \"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": container with ID starting with 411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.675450 4959 scope.go:117] "RemoveContainer" containerID="cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.675687 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} err="failed to get container status \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": rpc error: code = NotFound desc = could not find container \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": container with ID starting with cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.675705 4959 scope.go:117] "RemoveContainer" containerID="f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.675889 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} err="failed to get container status \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": rpc error: code = NotFound desc = could not find container \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": container with ID starting with f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.675904 4959 scope.go:117] "RemoveContainer" containerID="fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676069 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} err="failed to get container status \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": rpc error: code = NotFound desc = could not find container \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": container with ID starting with fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676084 4959 scope.go:117] "RemoveContainer" containerID="84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676297 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} err="failed to get container status 
\"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": rpc error: code = NotFound desc = could not find container \"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": container with ID starting with 84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676313 4959 scope.go:117] "RemoveContainer" containerID="e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676487 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} err="failed to get container status \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": rpc error: code = NotFound desc = could not find container \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": container with ID starting with e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676502 4959 scope.go:117] "RemoveContainer" containerID="22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676673 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} err="failed to get container status \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": rpc error: code = NotFound desc = could not find container \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": container with ID starting with 22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676690 4959 scope.go:117] "RemoveContainer" containerID="3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676850 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} err="failed to get container status \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": rpc error: code = NotFound desc = could not find container \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": container with ID starting with 3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.676865 4959 scope.go:117] "RemoveContainer" containerID="271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677018 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8"} err="failed to get container status \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": rpc error: code = NotFound desc = could not find container \"271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8\": container with ID starting with 271d76cb08749b33e0de45fc6f8b03c5ad5f63f67bf6a7c1895a5e89a485b7a8 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677032 4959 scope.go:117] "RemoveContainer" 
containerID="b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677205 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44"} err="failed to get container status \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": rpc error: code = NotFound desc = could not find container \"b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44\": container with ID starting with b835b0db57b74763c63d2401ab5dfbd69ba2f87aa600bb99f5029b2ab3d11c44 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677220 4959 scope.go:117] "RemoveContainer" containerID="411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677440 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c"} err="failed to get container status \"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": rpc error: code = NotFound desc = could not find container \"411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c\": container with ID starting with 411245eebb47d39530feac50370fbf5a55422520e814c707a5fbdfe33c14cd5c not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677454 4959 scope.go:117] "RemoveContainer" containerID="cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677633 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629"} err="failed to get container status \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": rpc error: code = NotFound desc = could not find container \"cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629\": container with ID starting with cce8846a2a33ce815d025405284236c6db52e9699b3d6f439a0d6bec642be629 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677648 4959 scope.go:117] "RemoveContainer" containerID="f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677815 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816"} err="failed to get container status \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": rpc error: code = NotFound desc = could not find container \"f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816\": container with ID starting with f6af8d587904dabe7b788e20e16a2496b079f21341db63a6bb7d80a1fe383816 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.677831 4959 scope.go:117] "RemoveContainer" containerID="fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678021 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3"} err="failed to get container status \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": rpc error: code = NotFound desc = could not find 
container \"fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3\": container with ID starting with fd4d692897eb1d7c5cb78affce4a1cbb65288b12f1759f7f7d75fe4c94c3d4f3 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678038 4959 scope.go:117] "RemoveContainer" containerID="84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678262 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd"} err="failed to get container status \"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": rpc error: code = NotFound desc = could not find container \"84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd\": container with ID starting with 84f3b8c6f7fd2ddfe4e00e1569c7d54ec4194c015f51224612f9c4d4ebeaa4bd not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678277 4959 scope.go:117] "RemoveContainer" containerID="e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678467 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033"} err="failed to get container status \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": rpc error: code = NotFound desc = could not find container \"e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033\": container with ID starting with e5a0c77e590a0cd3ec7896b3e5d5e2af4b87ba17de736404a84c057446b23033 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678483 4959 scope.go:117] "RemoveContainer" containerID="22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678697 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6"} err="failed to get container status \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": rpc error: code = NotFound desc = could not find container \"22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6\": container with ID starting with 22187943de9e079e8407cded530c14464b0943f62b9e59168b4db5abf1b272d6 not found: ID does not exist" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678713 4959 scope.go:117] "RemoveContainer" containerID="3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d" Jan 21 13:21:10 crc kubenswrapper[4959]: I0121 13:21:10.678904 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d"} err="failed to get container status \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": rpc error: code = NotFound desc = could not find container \"3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d\": container with ID starting with 3cb023f364c71aae5c0231be8b94635562c5e4d68c3777ebb8e32a30746a9b8d not found: ID does not exist" Jan 21 13:21:11 crc kubenswrapper[4959]: I0121 13:21:11.293729 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea635fd-8d4a-4b77-bb58-3d778f59c79e" path="/var/lib/kubelet/pods/eea635fd-8d4a-4b77-bb58-3d778f59c79e/volumes" Jan 21 
13:21:11 crc kubenswrapper[4959]: I0121 13:21:11.412720 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/2.log" Jan 21 13:21:11 crc kubenswrapper[4959]: I0121 13:21:11.413457 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/1.log" Jan 21 13:21:11 crc kubenswrapper[4959]: I0121 13:21:11.413593 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5zw9" event={"ID":"867d68b2-3803-46b0-b974-62ec7ee89b49","Type":"ContainerStarted","Data":"702d9b7d1879acff62def228d4a8db4c003d9c66d4fb739ba36d8e4a54c86a30"} Jan 21 13:21:11 crc kubenswrapper[4959]: I0121 13:21:11.417550 4959 generic.go:334] "Generic (PLEG): container finished" podID="e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0" containerID="25484abf827cb7ab596ce1f843fd005fe580a0517d5f2947fd620bac6a2e2f9f" exitCode=0 Jan 21 13:21:11 crc kubenswrapper[4959]: I0121 13:21:11.417616 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerDied","Data":"25484abf827cb7ab596ce1f843fd005fe580a0517d5f2947fd620bac6a2e2f9f"} Jan 21 13:21:11 crc kubenswrapper[4959]: I0121 13:21:11.417672 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"8fe749d9d71f779997b733ec6915fe6a7126982dcaaeb3f54166305e8b23357d"} Jan 21 13:21:12 crc kubenswrapper[4959]: I0121 13:21:12.428034 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"0325c4848778babc472c58979d17badcba27008ed374536ad8251e7f72ee2e27"} Jan 21 13:21:12 crc kubenswrapper[4959]: I0121 13:21:12.428395 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"ed2671a6bab59a651b084e1da990f49f5ce50c43b1f347eafd34a60136b662fe"} Jan 21 13:21:12 crc kubenswrapper[4959]: I0121 13:21:12.428412 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"db54d7c06cd4ee9cd3864a29dbef398612421ac051ef1d67054454bf51d3b071"} Jan 21 13:21:12 crc kubenswrapper[4959]: I0121 13:21:12.428424 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"5b33590830fb2b55f721b21c7e9f940afbd00c4e8e8f38a3cf4fec5ee2411145"} Jan 21 13:21:13 crc kubenswrapper[4959]: I0121 13:21:13.437580 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"6cb48c46549a54baafa259b537405479f0ff0e4e5acdbd5c440284e47942cf9a"} Jan 21 13:21:13 crc kubenswrapper[4959]: I0121 13:21:13.437933 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"4f83e16aff10c9a9a99af0e04652ef0849bfc06617e5ce15b3e83c819a979331"} Jan 21 13:21:15 
crc kubenswrapper[4959]: I0121 13:21:15.450374 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"80725786fadbf5941541992b03eac6e3772b189f67df2a01fc83d0e3585f7b74"} Jan 21 13:21:16 crc kubenswrapper[4959]: I0121 13:21:16.480182 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4lb8q" Jan 21 13:21:19 crc kubenswrapper[4959]: I0121 13:21:19.470841 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" event={"ID":"e8d7573f-6bc7-4af9-8e83-8fe798cdd7e0","Type":"ContainerStarted","Data":"d1c80ce5e26948a66b07b78c9ddbb5526ec0422d41079f026280e0e6c534851a"} Jan 21 13:21:19 crc kubenswrapper[4959]: I0121 13:21:19.471436 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:19 crc kubenswrapper[4959]: I0121 13:21:19.471453 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:19 crc kubenswrapper[4959]: I0121 13:21:19.500136 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" podStartSLOduration=9.500114427 podStartE2EDuration="9.500114427s" podCreationTimestamp="2026-01-21 13:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:21:19.497955108 +0000 UTC m=+740.460985681" watchObservedRunningTime="2026-01-21 13:21:19.500114427 +0000 UTC m=+740.463144970" Jan 21 13:21:19 crc kubenswrapper[4959]: I0121 13:21:19.547398 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:20 crc kubenswrapper[4959]: I0121 13:21:20.479293 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:20 crc kubenswrapper[4959]: I0121 13:21:20.509488 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:21 crc kubenswrapper[4959]: I0121 13:21:21.379981 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:21:21 crc kubenswrapper[4959]: I0121 13:21:21.380046 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:21:29 crc kubenswrapper[4959]: I0121 13:21:29.459394 4959 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 13:21:40 crc kubenswrapper[4959]: I0121 13:21:40.599877 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hp42r" Jan 21 13:21:51 crc kubenswrapper[4959]: I0121 13:21:51.379901 4959 patch_prober.go:28] interesting 
pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:21:51 crc kubenswrapper[4959]: I0121 13:21:51.380864 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:21:51 crc kubenswrapper[4959]: I0121 13:21:51.380927 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:21:51 crc kubenswrapper[4959]: I0121 13:21:51.381888 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b802840efc0a2a43f88d6b69a868dc35f4fb5bac7bce20e288e99506f21a88de"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:21:51 crc kubenswrapper[4959]: I0121 13:21:51.381946 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://b802840efc0a2a43f88d6b69a868dc35f4fb5bac7bce20e288e99506f21a88de" gracePeriod=600 Jan 21 13:21:52 crc kubenswrapper[4959]: I0121 13:21:52.166650 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="b802840efc0a2a43f88d6b69a868dc35f4fb5bac7bce20e288e99506f21a88de" exitCode=0 Jan 21 13:21:52 crc kubenswrapper[4959]: I0121 13:21:52.166762 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"b802840efc0a2a43f88d6b69a868dc35f4fb5bac7bce20e288e99506f21a88de"} Jan 21 13:21:52 crc kubenswrapper[4959]: I0121 13:21:52.167043 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"843c9a535cea21503639885bda8c5e42d1482db615844b1ac00c900cdaba0bca"} Jan 21 13:21:52 crc kubenswrapper[4959]: I0121 13:21:52.167078 4959 scope.go:117] "RemoveContainer" containerID="aad89835c8a01e37a654d9249967f8eba913bfa1b726fe57032b28a559caff14" Jan 21 13:21:59 crc kubenswrapper[4959]: I0121 13:21:59.590814 4959 scope.go:117] "RemoveContainer" containerID="4ad17d0b4efeb4694e4ce1ca92ab707376aab2b45fef78fe779ca91549cfc031" Jan 21 13:22:01 crc kubenswrapper[4959]: I0121 13:22:01.224321 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/2.log" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.119165 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c"] Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.121169 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.123677 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.129270 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c"] Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.321620 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.321723 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z42xq\" (UniqueName: \"kubernetes.io/projected/3d19cedc-8035-48c5-9702-3670bcf397dc-kube-api-access-z42xq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.321764 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.423054 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z42xq\" (UniqueName: \"kubernetes.io/projected/3d19cedc-8035-48c5-9702-3670bcf397dc-kube-api-access-z42xq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.423115 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.423193 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.423608 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.423825 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.445145 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z42xq\" (UniqueName: \"kubernetes.io/projected/3d19cedc-8035-48c5-9702-3670bcf397dc-kube-api-access-z42xq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.737565 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:05 crc kubenswrapper[4959]: I0121 13:22:05.921112 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c"] Jan 21 13:22:06 crc kubenswrapper[4959]: I0121 13:22:06.254025 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" event={"ID":"3d19cedc-8035-48c5-9702-3670bcf397dc","Type":"ContainerStarted","Data":"191c173d66483208a2bf63178a6e9126991c00a85ec44d55e77b9e4b8b1c9dbc"} Jan 21 13:22:06 crc kubenswrapper[4959]: I0121 13:22:06.254390 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" event={"ID":"3d19cedc-8035-48c5-9702-3670bcf397dc","Type":"ContainerStarted","Data":"a27b707c782aaf4f07c2c9af20296351bf4a58f815c76d859613929cc6ad9a13"} Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.259430 4959 generic.go:334] "Generic (PLEG): container finished" podID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerID="191c173d66483208a2bf63178a6e9126991c00a85ec44d55e77b9e4b8b1c9dbc" exitCode=0 Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.259478 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" event={"ID":"3d19cedc-8035-48c5-9702-3670bcf397dc","Type":"ContainerDied","Data":"191c173d66483208a2bf63178a6e9126991c00a85ec44d55e77b9e4b8b1c9dbc"} Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.481530 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c9wjh"] Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.483784 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.496147 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9wjh"] Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.650441 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-catalog-content\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.650493 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxmj\" (UniqueName: \"kubernetes.io/projected/09f7629c-5924-4bdb-be94-335fb25a5149-kube-api-access-fbxmj\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.650661 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-utilities\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.751778 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-catalog-content\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.751830 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxmj\" (UniqueName: \"kubernetes.io/projected/09f7629c-5924-4bdb-be94-335fb25a5149-kube-api-access-fbxmj\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.751862 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-utilities\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.752334 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-utilities\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.752479 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-catalog-content\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.772087 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fbxmj\" (UniqueName: \"kubernetes.io/projected/09f7629c-5924-4bdb-be94-335fb25a5149-kube-api-access-fbxmj\") pod \"redhat-operators-c9wjh\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:07 crc kubenswrapper[4959]: I0121 13:22:07.806507 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:08 crc kubenswrapper[4959]: I0121 13:22:08.019743 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9wjh"] Jan 21 13:22:08 crc kubenswrapper[4959]: W0121 13:22:08.024160 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f7629c_5924_4bdb_be94_335fb25a5149.slice/crio-ed6cc9660a1753facf532b7a612085cb93b8afa702f0b256c0aed6a6c0e2e845 WatchSource:0}: Error finding container ed6cc9660a1753facf532b7a612085cb93b8afa702f0b256c0aed6a6c0e2e845: Status 404 returned error can't find the container with id ed6cc9660a1753facf532b7a612085cb93b8afa702f0b256c0aed6a6c0e2e845 Jan 21 13:22:08 crc kubenswrapper[4959]: I0121 13:22:08.264370 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wjh" event={"ID":"09f7629c-5924-4bdb-be94-335fb25a5149","Type":"ContainerStarted","Data":"ed6cc9660a1753facf532b7a612085cb93b8afa702f0b256c0aed6a6c0e2e845"} Jan 21 13:22:09 crc kubenswrapper[4959]: I0121 13:22:09.272048 4959 generic.go:334] "Generic (PLEG): container finished" podID="09f7629c-5924-4bdb-be94-335fb25a5149" containerID="e028b55dfdde852f6d74e68d575ada7043dc7a689d33911df923c6023fba5bb2" exitCode=0 Jan 21 13:22:09 crc kubenswrapper[4959]: I0121 13:22:09.272353 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wjh" event={"ID":"09f7629c-5924-4bdb-be94-335fb25a5149","Type":"ContainerDied","Data":"e028b55dfdde852f6d74e68d575ada7043dc7a689d33911df923c6023fba5bb2"} Jan 21 13:22:09 crc kubenswrapper[4959]: I0121 13:22:09.274864 4959 generic.go:334] "Generic (PLEG): container finished" podID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerID="bdd912f77e4a7fed9d7229bb8ea8c4e0164527bfbd2e4201502cf4f039011523" exitCode=0 Jan 21 13:22:09 crc kubenswrapper[4959]: I0121 13:22:09.274918 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" event={"ID":"3d19cedc-8035-48c5-9702-3670bcf397dc","Type":"ContainerDied","Data":"bdd912f77e4a7fed9d7229bb8ea8c4e0164527bfbd2e4201502cf4f039011523"} Jan 21 13:22:10 crc kubenswrapper[4959]: I0121 13:22:10.283062 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wjh" event={"ID":"09f7629c-5924-4bdb-be94-335fb25a5149","Type":"ContainerStarted","Data":"becc5c9f1f5fa9c6f430149c7bf91baa7b432b181386d2e35541b7e06638e5b3"} Jan 21 13:22:10 crc kubenswrapper[4959]: I0121 13:22:10.286482 4959 generic.go:334] "Generic (PLEG): container finished" podID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerID="c14a380ff933c32a9a57afdc4e1ebff5cc1b1635c657129d6d7fe2cb9c8a1f13" exitCode=0 Jan 21 13:22:10 crc kubenswrapper[4959]: I0121 13:22:10.286548 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" 
event={"ID":"3d19cedc-8035-48c5-9702-3670bcf397dc","Type":"ContainerDied","Data":"c14a380ff933c32a9a57afdc4e1ebff5cc1b1635c657129d6d7fe2cb9c8a1f13"} Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.533066 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.701524 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-bundle\") pod \"3d19cedc-8035-48c5-9702-3670bcf397dc\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.701587 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z42xq\" (UniqueName: \"kubernetes.io/projected/3d19cedc-8035-48c5-9702-3670bcf397dc-kube-api-access-z42xq\") pod \"3d19cedc-8035-48c5-9702-3670bcf397dc\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.702173 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-bundle" (OuterVolumeSpecName: "bundle") pod "3d19cedc-8035-48c5-9702-3670bcf397dc" (UID: "3d19cedc-8035-48c5-9702-3670bcf397dc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.702504 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-util\") pod \"3d19cedc-8035-48c5-9702-3670bcf397dc\" (UID: \"3d19cedc-8035-48c5-9702-3670bcf397dc\") " Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.702768 4959 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.711746 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-util" (OuterVolumeSpecName: "util") pod "3d19cedc-8035-48c5-9702-3670bcf397dc" (UID: "3d19cedc-8035-48c5-9702-3670bcf397dc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.712440 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d19cedc-8035-48c5-9702-3670bcf397dc-kube-api-access-z42xq" (OuterVolumeSpecName: "kube-api-access-z42xq") pod "3d19cedc-8035-48c5-9702-3670bcf397dc" (UID: "3d19cedc-8035-48c5-9702-3670bcf397dc"). InnerVolumeSpecName "kube-api-access-z42xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.803790 4959 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d19cedc-8035-48c5-9702-3670bcf397dc-util\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:11 crc kubenswrapper[4959]: I0121 13:22:11.803835 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z42xq\" (UniqueName: \"kubernetes.io/projected/3d19cedc-8035-48c5-9702-3670bcf397dc-kube-api-access-z42xq\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:12 crc kubenswrapper[4959]: I0121 13:22:12.298130 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" Jan 21 13:22:12 crc kubenswrapper[4959]: I0121 13:22:12.298140 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c" event={"ID":"3d19cedc-8035-48c5-9702-3670bcf397dc","Type":"ContainerDied","Data":"a27b707c782aaf4f07c2c9af20296351bf4a58f815c76d859613929cc6ad9a13"} Jan 21 13:22:12 crc kubenswrapper[4959]: I0121 13:22:12.298257 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a27b707c782aaf4f07c2c9af20296351bf4a58f815c76d859613929cc6ad9a13" Jan 21 13:22:12 crc kubenswrapper[4959]: I0121 13:22:12.299729 4959 generic.go:334] "Generic (PLEG): container finished" podID="09f7629c-5924-4bdb-be94-335fb25a5149" containerID="becc5c9f1f5fa9c6f430149c7bf91baa7b432b181386d2e35541b7e06638e5b3" exitCode=0 Jan 21 13:22:12 crc kubenswrapper[4959]: I0121 13:22:12.299797 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wjh" event={"ID":"09f7629c-5924-4bdb-be94-335fb25a5149","Type":"ContainerDied","Data":"becc5c9f1f5fa9c6f430149c7bf91baa7b432b181386d2e35541b7e06638e5b3"} Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.318669 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wjh" event={"ID":"09f7629c-5924-4bdb-be94-335fb25a5149","Type":"ContainerStarted","Data":"7657e71305b91002c9189de00107d6fcc14a02ed0de6c216c4672146c99bc106"} Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.338215 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c9wjh" podStartSLOduration=3.4837447 podStartE2EDuration="8.338193038s" podCreationTimestamp="2026-01-21 13:22:07 +0000 UTC" firstStartedPulling="2026-01-21 13:22:09.273834787 +0000 UTC m=+790.236865330" lastFinishedPulling="2026-01-21 13:22:14.128283135 +0000 UTC m=+795.091313668" observedRunningTime="2026-01-21 13:22:15.333685554 +0000 UTC m=+796.296716117" watchObservedRunningTime="2026-01-21 13:22:15.338193038 +0000 UTC m=+796.301223581" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.570933 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6wfg2"] Jan 21 13:22:15 crc kubenswrapper[4959]: E0121 13:22:15.571158 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerName="extract" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.571169 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerName="extract" Jan 21 13:22:15 crc kubenswrapper[4959]: E0121 13:22:15.571180 4959 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerName="util" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.571186 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerName="util" Jan 21 13:22:15 crc kubenswrapper[4959]: E0121 13:22:15.571200 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerName="pull" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.571206 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerName="pull" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.571291 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d19cedc-8035-48c5-9702-3670bcf397dc" containerName="extract" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.571656 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.572929 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ql5rj" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.573303 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.574804 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.588939 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6wfg2"] Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.653341 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnwcn\" (UniqueName: \"kubernetes.io/projected/3640732f-6cfa-4b56-a153-bfdc00a70169-kube-api-access-pnwcn\") pod \"nmstate-operator-646758c888-6wfg2\" (UID: \"3640732f-6cfa-4b56-a153-bfdc00a70169\") " pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.754797 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnwcn\" (UniqueName: \"kubernetes.io/projected/3640732f-6cfa-4b56-a153-bfdc00a70169-kube-api-access-pnwcn\") pod \"nmstate-operator-646758c888-6wfg2\" (UID: \"3640732f-6cfa-4b56-a153-bfdc00a70169\") " pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.781572 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnwcn\" (UniqueName: \"kubernetes.io/projected/3640732f-6cfa-4b56-a153-bfdc00a70169-kube-api-access-pnwcn\") pod \"nmstate-operator-646758c888-6wfg2\" (UID: \"3640732f-6cfa-4b56-a153-bfdc00a70169\") " pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" Jan 21 13:22:15 crc kubenswrapper[4959]: I0121 13:22:15.886241 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" Jan 21 13:22:16 crc kubenswrapper[4959]: I0121 13:22:16.105215 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6wfg2"] Jan 21 13:22:16 crc kubenswrapper[4959]: W0121 13:22:16.107807 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3640732f_6cfa_4b56_a153_bfdc00a70169.slice/crio-0f69eaf35d7935431303d7a7273b9f256928f0977b2d4f51404b4e10e75ee398 WatchSource:0}: Error finding container 0f69eaf35d7935431303d7a7273b9f256928f0977b2d4f51404b4e10e75ee398: Status 404 returned error can't find the container with id 0f69eaf35d7935431303d7a7273b9f256928f0977b2d4f51404b4e10e75ee398 Jan 21 13:22:16 crc kubenswrapper[4959]: I0121 13:22:16.327079 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" event={"ID":"3640732f-6cfa-4b56-a153-bfdc00a70169","Type":"ContainerStarted","Data":"0f69eaf35d7935431303d7a7273b9f256928f0977b2d4f51404b4e10e75ee398"} Jan 21 13:22:17 crc kubenswrapper[4959]: I0121 13:22:17.807277 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:17 crc kubenswrapper[4959]: I0121 13:22:17.807637 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:18 crc kubenswrapper[4959]: I0121 13:22:18.846496 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c9wjh" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="registry-server" probeResult="failure" output=< Jan 21 13:22:18 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 13:22:18 crc kubenswrapper[4959]: > Jan 21 13:22:27 crc kubenswrapper[4959]: I0121 13:22:27.851558 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:28 crc kubenswrapper[4959]: I0121 13:22:28.017822 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:28 crc kubenswrapper[4959]: I0121 13:22:28.084506 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9wjh"] Jan 21 13:22:28 crc kubenswrapper[4959]: E0121 13:22:28.599457 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:cc7947907fa500c8feff6eba1bd58f57d0e3731ffe109ac36752d80a97d6181a" Jan 21 13:22:28 crc kubenswrapper[4959]: E0121 13:22:28.599675 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:nmstate-operator,Image:registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:cc7947907fa500c8feff6eba1bd58f57d0e3731ffe109ac36752d80a97d6181a,Command:[manager],Args:[--zap-time-encoding=iso8601],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:kubernetes-nmstate-operator,ValueFrom:nil,},EnvVar{Name:ENABLE_PROFILER,Value:False,ValueFrom:nil,},EnvVar{Name:PROFILER_PORT,Value:6060,ValueFrom:nil,},EnvVar{Name:RUN_OPERATOR,Value:,ValueFrom:nil,},EnvVar{Name:HANDLER_IMAGE,Value:registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e055f7d3e851e17b63d9afac4ed5a5e826f53eacadd4a958293d876780aeb1a8,ValueFrom:nil,},EnvVar{Name:PLUGIN_IMAGE,Value:registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:8a170123e208637736974d1a706f8b51b6a0cc28e428664e7d9eb3bff18d9140,ValueFrom:nil,},EnvVar{Name:HANDLER_IMAGE_PULL_POLICY,Value:Always,ValueFrom:nil,},EnvVar{Name:HANDLER_NAMESPACE,Value:openshift-nmstate,ValueFrom:nil,},EnvVar{Name:MONITORING_NAMESPACE,Value:openshift-monitoring,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:86800d7a823cf444db8393dd7ffa735b2e42e9120f3f869487b0a2ed6b0db73d,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:kubernetes-nmstate-operator.4.18.0-202601072343,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{60 -3} {} 60m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnwcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nmstate-operator-646758c888-6wfg2_openshift-nmstate(3640732f-6cfa-4b56-a153-bfdc00a70169): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:22:28 crc kubenswrapper[4959]: E0121 13:22:28.601010 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nmstate-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" podUID="3640732f-6cfa-4b56-a153-bfdc00a70169" Jan 21 13:22:29 crc kubenswrapper[4959]: I0121 13:22:29.388069 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c9wjh" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="registry-server" 
containerID="cri-o://7657e71305b91002c9189de00107d6fcc14a02ed0de6c216c4672146c99bc106" gracePeriod=2 Jan 21 13:22:29 crc kubenswrapper[4959]: E0121 13:22:29.390346 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nmstate-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:cc7947907fa500c8feff6eba1bd58f57d0e3731ffe109ac36752d80a97d6181a\\\"\"" pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" podUID="3640732f-6cfa-4b56-a153-bfdc00a70169" Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.408694 4959 generic.go:334] "Generic (PLEG): container finished" podID="09f7629c-5924-4bdb-be94-335fb25a5149" containerID="7657e71305b91002c9189de00107d6fcc14a02ed0de6c216c4672146c99bc106" exitCode=0 Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.408739 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wjh" event={"ID":"09f7629c-5924-4bdb-be94-335fb25a5149","Type":"ContainerDied","Data":"7657e71305b91002c9189de00107d6fcc14a02ed0de6c216c4672146c99bc106"} Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.656998 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.757864 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-catalog-content\") pod \"09f7629c-5924-4bdb-be94-335fb25a5149\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.757966 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbxmj\" (UniqueName: \"kubernetes.io/projected/09f7629c-5924-4bdb-be94-335fb25a5149-kube-api-access-fbxmj\") pod \"09f7629c-5924-4bdb-be94-335fb25a5149\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.758050 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-utilities\") pod \"09f7629c-5924-4bdb-be94-335fb25a5149\" (UID: \"09f7629c-5924-4bdb-be94-335fb25a5149\") " Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.758980 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-utilities" (OuterVolumeSpecName: "utilities") pod "09f7629c-5924-4bdb-be94-335fb25a5149" (UID: "09f7629c-5924-4bdb-be94-335fb25a5149"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.767308 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f7629c-5924-4bdb-be94-335fb25a5149-kube-api-access-fbxmj" (OuterVolumeSpecName: "kube-api-access-fbxmj") pod "09f7629c-5924-4bdb-be94-335fb25a5149" (UID: "09f7629c-5924-4bdb-be94-335fb25a5149"). InnerVolumeSpecName "kube-api-access-fbxmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.859754 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.859800 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbxmj\" (UniqueName: \"kubernetes.io/projected/09f7629c-5924-4bdb-be94-335fb25a5149-kube-api-access-fbxmj\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.882901 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09f7629c-5924-4bdb-be94-335fb25a5149" (UID: "09f7629c-5924-4bdb-be94-335fb25a5149"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:22:31 crc kubenswrapper[4959]: I0121 13:22:31.960536 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f7629c-5924-4bdb-be94-335fb25a5149-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:32 crc kubenswrapper[4959]: I0121 13:22:32.416349 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wjh" event={"ID":"09f7629c-5924-4bdb-be94-335fb25a5149","Type":"ContainerDied","Data":"ed6cc9660a1753facf532b7a612085cb93b8afa702f0b256c0aed6a6c0e2e845"} Jan 21 13:22:32 crc kubenswrapper[4959]: I0121 13:22:32.416406 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9wjh" Jan 21 13:22:32 crc kubenswrapper[4959]: I0121 13:22:32.416414 4959 scope.go:117] "RemoveContainer" containerID="7657e71305b91002c9189de00107d6fcc14a02ed0de6c216c4672146c99bc106" Jan 21 13:22:32 crc kubenswrapper[4959]: I0121 13:22:32.445240 4959 scope.go:117] "RemoveContainer" containerID="becc5c9f1f5fa9c6f430149c7bf91baa7b432b181386d2e35541b7e06638e5b3" Jan 21 13:22:32 crc kubenswrapper[4959]: I0121 13:22:32.445812 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9wjh"] Jan 21 13:22:32 crc kubenswrapper[4959]: I0121 13:22:32.450331 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c9wjh"] Jan 21 13:22:32 crc kubenswrapper[4959]: I0121 13:22:32.462526 4959 scope.go:117] "RemoveContainer" containerID="e028b55dfdde852f6d74e68d575ada7043dc7a689d33911df923c6023fba5bb2" Jan 21 13:22:33 crc kubenswrapper[4959]: I0121 13:22:33.293277 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" path="/var/lib/kubelet/pods/09f7629c-5924-4bdb-be94-335fb25a5149/volumes" Jan 21 13:22:45 crc kubenswrapper[4959]: I0121 13:22:45.483136 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" event={"ID":"3640732f-6cfa-4b56-a153-bfdc00a70169","Type":"ContainerStarted","Data":"0775f41e811730e07a59edc57163e688710f134b8613f1241abd552892b5953c"} Jan 21 13:22:45 crc kubenswrapper[4959]: I0121 13:22:45.503808 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-6wfg2" podStartSLOduration=2.215936603 podStartE2EDuration="30.503787768s" 
podCreationTimestamp="2026-01-21 13:22:15 +0000 UTC" firstStartedPulling="2026-01-21 13:22:16.110107243 +0000 UTC m=+797.073137786" lastFinishedPulling="2026-01-21 13:22:44.397958408 +0000 UTC m=+825.360988951" observedRunningTime="2026-01-21 13:22:45.502524203 +0000 UTC m=+826.465554746" watchObservedRunningTime="2026-01-21 13:22:45.503787768 +0000 UTC m=+826.466818311" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.511857 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-sspgk"] Jan 21 13:22:46 crc kubenswrapper[4959]: E0121 13:22:46.519209 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="extract-content" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.519258 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="extract-content" Jan 21 13:22:46 crc kubenswrapper[4959]: E0121 13:22:46.519266 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="registry-server" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.519272 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="registry-server" Jan 21 13:22:46 crc kubenswrapper[4959]: E0121 13:22:46.519295 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="extract-utilities" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.519301 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="extract-utilities" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.519552 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f7629c-5924-4bdb-be94-335fb25a5149" containerName="registry-server" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.521461 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.527776 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-lwdfw" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.562045 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4"] Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.562976 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.574022 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.600379 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-sspgk"] Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.607479 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jr2mn"] Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.608217 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.637691 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4"] Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.700451 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg"] Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.701177 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704251 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-v5dc4" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704394 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704605 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704694 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfp62\" (UniqueName: \"kubernetes.io/projected/e5a1db10-de2f-423d-a482-087eb1eaf3d0-kube-api-access-kfp62\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704741 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-dbus-socket\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704762 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-ovs-socket\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704792 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kb6h\" (UniqueName: \"kubernetes.io/projected/eb1df3cf-b716-4606-8f5b-fb2f1631e5fa-kube-api-access-9kb6h\") pod \"nmstate-metrics-54757c584b-sspgk\" (UID: \"eb1df3cf-b716-4606-8f5b-fb2f1631e5fa\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704813 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhv5j\" (UniqueName: \"kubernetes.io/projected/62025af4-bb62-4cbd-a420-77ce0cbea9ff-kube-api-access-nhv5j\") pod \"nmstate-webhook-8474b5b9d8-g5bl4\" (UID: \"62025af4-bb62-4cbd-a420-77ce0cbea9ff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704846 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-nmstate-lock\") pod \"nmstate-handler-jr2mn\" (UID: 
\"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.704864 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62025af4-bb62-4cbd-a420-77ce0cbea9ff-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-g5bl4\" (UID: \"62025af4-bb62-4cbd-a420-77ce0cbea9ff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.713247 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg"] Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806392 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9616773b-3c4f-4141-871b-0d35828d0d52-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806445 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-dbus-socket\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806465 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-ovs-socket\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806488 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kb6h\" (UniqueName: \"kubernetes.io/projected/eb1df3cf-b716-4606-8f5b-fb2f1631e5fa-kube-api-access-9kb6h\") pod \"nmstate-metrics-54757c584b-sspgk\" (UID: \"eb1df3cf-b716-4606-8f5b-fb2f1631e5fa\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806511 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhv5j\" (UniqueName: \"kubernetes.io/projected/62025af4-bb62-4cbd-a420-77ce0cbea9ff-kube-api-access-nhv5j\") pod \"nmstate-webhook-8474b5b9d8-g5bl4\" (UID: \"62025af4-bb62-4cbd-a420-77ce0cbea9ff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806534 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-ovs-socket\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806546 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9616773b-3c4f-4141-871b-0d35828d0d52-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc 
kubenswrapper[4959]: I0121 13:22:46.806724 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-nmstate-lock\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806732 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-dbus-socket\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806773 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62025af4-bb62-4cbd-a420-77ce0cbea9ff-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-g5bl4\" (UID: \"62025af4-bb62-4cbd-a420-77ce0cbea9ff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806821 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e5a1db10-de2f-423d-a482-087eb1eaf3d0-nmstate-lock\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806822 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtd7\" (UniqueName: \"kubernetes.io/projected/9616773b-3c4f-4141-871b-0d35828d0d52-kube-api-access-fwtd7\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc kubenswrapper[4959]: E0121 13:22:46.806912 4959 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.806930 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfp62\" (UniqueName: \"kubernetes.io/projected/e5a1db10-de2f-423d-a482-087eb1eaf3d0-kube-api-access-kfp62\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: E0121 13:22:46.806965 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62025af4-bb62-4cbd-a420-77ce0cbea9ff-tls-key-pair podName:62025af4-bb62-4cbd-a420-77ce0cbea9ff nodeName:}" failed. No retries permitted until 2026-01-21 13:22:47.306945575 +0000 UTC m=+828.269976108 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/62025af4-bb62-4cbd-a420-77ce0cbea9ff-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-g5bl4" (UID: "62025af4-bb62-4cbd-a420-77ce0cbea9ff") : secret "openshift-nmstate-webhook" not found Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.828230 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhv5j\" (UniqueName: \"kubernetes.io/projected/62025af4-bb62-4cbd-a420-77ce0cbea9ff-kube-api-access-nhv5j\") pod \"nmstate-webhook-8474b5b9d8-g5bl4\" (UID: \"62025af4-bb62-4cbd-a420-77ce0cbea9ff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.834852 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfp62\" (UniqueName: \"kubernetes.io/projected/e5a1db10-de2f-423d-a482-087eb1eaf3d0-kube-api-access-kfp62\") pod \"nmstate-handler-jr2mn\" (UID: \"e5a1db10-de2f-423d-a482-087eb1eaf3d0\") " pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.839499 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kb6h\" (UniqueName: \"kubernetes.io/projected/eb1df3cf-b716-4606-8f5b-fb2f1631e5fa-kube-api-access-9kb6h\") pod \"nmstate-metrics-54757c584b-sspgk\" (UID: \"eb1df3cf-b716-4606-8f5b-fb2f1631e5fa\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.875681 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.908173 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9616773b-3c4f-4141-871b-0d35828d0d52-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.908262 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwtd7\" (UniqueName: \"kubernetes.io/projected/9616773b-3c4f-4141-871b-0d35828d0d52-kube-api-access-fwtd7\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.908308 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9616773b-3c4f-4141-871b-0d35828d0d52-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc kubenswrapper[4959]: E0121 13:22:46.908449 4959 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 21 13:22:46 crc kubenswrapper[4959]: E0121 13:22:46.908516 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9616773b-3c4f-4141-871b-0d35828d0d52-plugin-serving-cert podName:9616773b-3c4f-4141-871b-0d35828d0d52 nodeName:}" failed. No retries permitted until 2026-01-21 13:22:47.408499388 +0000 UTC m=+828.371529931 (durationBeforeRetry 500ms). 
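The two nestedpendingoperations errors above show the volume reconciler's retry gate: a failed MountVolume.SetUp is not retried before a deadline ("No retries permitted until ..."), with the wait starting at the logged 500ms durationBeforeRetry and growing on repeated failures. A minimal sketch of that pattern, assuming a doubling backoff with a cap (illustrative names and constants, not kubelet's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond // matches the logged 500ms
	maxDurationBeforeRetry     = 2 * time.Minute        // assumed cap
)

type retryGate struct {
	lastErrorTime       time.Time
	durationBeforeRetry time.Duration
}

// recordError pushes the next permitted attempt further out, doubling the wait.
func (g *retryGate) recordError(now time.Time) {
	if g.durationBeforeRetry == 0 {
		g.durationBeforeRetry = initialDurationBeforeRetry
	} else {
		g.durationBeforeRetry *= 2
		if g.durationBeforeRetry > maxDurationBeforeRetry {
			g.durationBeforeRetry = maxDurationBeforeRetry
		}
	}
	g.lastErrorTime = now
}

// allowed reports whether a new attempt is permitted at time now.
func (g *retryGate) allowed(now time.Time) bool {
	return now.Sub(g.lastErrorTime) >= g.durationBeforeRetry
}

func main() {
	gate := &retryGate{}
	mountSecret := func() error { return errors.New(`secret "plugin-serving-cert" not found`) }

	now := time.Now()
	for tick := 0; tick < 4; tick++ {
		if !gate.allowed(now) {
			fmt.Printf("no retries permitted until %s\n",
				gate.lastErrorTime.Add(gate.durationBeforeRetry).Format(time.RFC3339Nano))
		} else if err := mountSecret(); err != nil {
			gate.recordError(now)
			fmt.Printf("mount failed: %v (durationBeforeRetry %s)\n", err, gate.durationBeforeRetry)
		}
		now = now.Add(300 * time.Millisecond) // the reconciler re-runs on a short period
	}
}
```

In this log the backoff never escalates: the serving-cert secrets are created moments later, and the retries at 13:22:47 succeed.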
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/9616773b-3c4f-4141-871b-0d35828d0d52-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-qwnmg" (UID: "9616773b-3c4f-4141-871b-0d35828d0d52") : secret "plugin-serving-cert" not found Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.909302 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9616773b-3c4f-4141-871b-0d35828d0d52-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.913934 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-578db8b447-r5vmq"] Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.914671 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.927862 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-578db8b447-r5vmq"] Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.937329 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwtd7\" (UniqueName: \"kubernetes.io/projected/9616773b-3c4f-4141-871b-0d35828d0d52-kube-api-access-fwtd7\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:46 crc kubenswrapper[4959]: I0121 13:22:46.975505 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.009670 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-config\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.009783 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84gxv\" (UniqueName: \"kubernetes.io/projected/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-kube-api-access-84gxv\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.009921 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-serving-cert\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.010069 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-trusted-ca-bundle\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.010134 4959 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-service-ca\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.010162 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-oauth-serving-cert\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.010235 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-oauth-config\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.111927 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-config\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.112327 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84gxv\" (UniqueName: \"kubernetes.io/projected/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-kube-api-access-84gxv\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.112367 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-serving-cert\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.112422 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-trusted-ca-bundle\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.112453 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-service-ca\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.112475 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-oauth-serving-cert\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc 
kubenswrapper[4959]: I0121 13:22:47.112505 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-oauth-config\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.113949 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-service-ca\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.113976 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-trusted-ca-bundle\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.114909 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-oauth-serving-cert\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.115549 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-config\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.117358 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-oauth-config\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.122980 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-console-serving-cert\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.128012 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84gxv\" (UniqueName: \"kubernetes.io/projected/1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b-kube-api-access-84gxv\") pod \"console-578db8b447-r5vmq\" (UID: \"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b\") " pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.299487 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.315286 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62025af4-bb62-4cbd-a420-77ce0cbea9ff-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-g5bl4\" (UID: \"62025af4-bb62-4cbd-a420-77ce0cbea9ff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.318236 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/62025af4-bb62-4cbd-a420-77ce0cbea9ff-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-g5bl4\" (UID: \"62025af4-bb62-4cbd-a420-77ce0cbea9ff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.367533 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-sspgk"] Jan 21 13:22:47 crc kubenswrapper[4959]: W0121 13:22:47.378245 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb1df3cf_b716_4606_8f5b_fb2f1631e5fa.slice/crio-55ea38fe0b22da81b90201b15ca8eb2b4a6c3456c6224b5a3f67392fc1457b95 WatchSource:0}: Error finding container 55ea38fe0b22da81b90201b15ca8eb2b4a6c3456c6224b5a3f67392fc1457b95: Status 404 returned error can't find the container with id 55ea38fe0b22da81b90201b15ca8eb2b4a6c3456c6224b5a3f67392fc1457b95 Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.417164 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9616773b-3c4f-4141-871b-0d35828d0d52-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.421433 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9616773b-3c4f-4141-871b-0d35828d0d52-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qwnmg\" (UID: \"9616773b-3c4f-4141-871b-0d35828d0d52\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.474525 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-578db8b447-r5vmq"] Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.493576 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.494187 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jr2mn" event={"ID":"e5a1db10-de2f-423d-a482-087eb1eaf3d0","Type":"ContainerStarted","Data":"92950b43f15b2448d7bb2fe103584e469ba7480b9e6af3e031d980a2ca3d3810"} Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.495518 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" event={"ID":"eb1df3cf-b716-4606-8f5b-fb2f1631e5fa","Type":"ContainerStarted","Data":"55ea38fe0b22da81b90201b15ca8eb2b4a6c3456c6224b5a3f67392fc1457b95"} Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.496547 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-578db8b447-r5vmq" event={"ID":"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b","Type":"ContainerStarted","Data":"544a1c740786b4bca09e17440322ceb20856415651171c9c475cfc95c22ef98c"} Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.618592 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.674016 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4"] Jan 21 13:22:47 crc kubenswrapper[4959]: W0121 13:22:47.679140 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62025af4_bb62_4cbd_a420_77ce0cbea9ff.slice/crio-d13551614f253a1aa1322cc07c8d69f54e268088e6c54db190bcce8f49f546b8 WatchSource:0}: Error finding container d13551614f253a1aa1322cc07c8d69f54e268088e6c54db190bcce8f49f546b8: Status 404 returned error can't find the container with id d13551614f253a1aa1322cc07c8d69f54e268088e6c54db190bcce8f49f546b8 Jan 21 13:22:47 crc kubenswrapper[4959]: I0121 13:22:47.999136 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg"] Jan 21 13:22:48 crc kubenswrapper[4959]: W0121 13:22:48.007855 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9616773b_3c4f_4141_871b_0d35828d0d52.slice/crio-4d7a07f7c61c6a89fc54766740e37e182f225162d1219452ffb1fbe739154cd2 WatchSource:0}: Error finding container 4d7a07f7c61c6a89fc54766740e37e182f225162d1219452ffb1fbe739154cd2: Status 404 returned error can't find the container with id 4d7a07f7c61c6a89fc54766740e37e182f225162d1219452ffb1fbe739154cd2 Jan 21 13:22:48 crc kubenswrapper[4959]: I0121 13:22:48.503896 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" event={"ID":"9616773b-3c4f-4141-871b-0d35828d0d52","Type":"ContainerStarted","Data":"4d7a07f7c61c6a89fc54766740e37e182f225162d1219452ffb1fbe739154cd2"} Jan 21 13:22:48 crc kubenswrapper[4959]: I0121 13:22:48.505408 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" event={"ID":"62025af4-bb62-4cbd-a420-77ce0cbea9ff","Type":"ContainerStarted","Data":"d13551614f253a1aa1322cc07c8d69f54e268088e6c54db190bcce8f49f546b8"} Jan 21 13:22:50 crc kubenswrapper[4959]: I0121 13:22:50.521601 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-578db8b447-r5vmq" 
event={"ID":"1e0e57ce-dfd8-41d1-ae65-15b0ba52c69b","Type":"ContainerStarted","Data":"3baaeb483c91ebbe6d5d1d2462fa4e616ebe58cde1374f382a1babe7f7093e13"} Jan 21 13:22:50 crc kubenswrapper[4959]: I0121 13:22:50.546939 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-578db8b447-r5vmq" podStartSLOduration=4.546919836 podStartE2EDuration="4.546919836s" podCreationTimestamp="2026-01-21 13:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:50.538085702 +0000 UTC m=+831.501116265" watchObservedRunningTime="2026-01-21 13:22:50.546919836 +0000 UTC m=+831.509950369" Jan 21 13:22:52 crc kubenswrapper[4959]: I0121 13:22:52.535210 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jr2mn" event={"ID":"e5a1db10-de2f-423d-a482-087eb1eaf3d0","Type":"ContainerStarted","Data":"7f4395c62fc540cc8a1bdfb733202e7bc35849d80837a2828257ff7d5dd40869"} Jan 21 13:22:52 crc kubenswrapper[4959]: I0121 13:22:52.535937 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:22:52 crc kubenswrapper[4959]: I0121 13:22:52.537968 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" event={"ID":"62025af4-bb62-4cbd-a420-77ce0cbea9ff","Type":"ContainerStarted","Data":"3459f2664fc75e6b659c3e7b199498cdffb64ef34896d63559b49fffe22ed009"} Jan 21 13:22:52 crc kubenswrapper[4959]: I0121 13:22:52.538405 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:22:52 crc kubenswrapper[4959]: I0121 13:22:52.545304 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" event={"ID":"eb1df3cf-b716-4606-8f5b-fb2f1631e5fa","Type":"ContainerStarted","Data":"49499c82f1ea5cb3694a8baaf217ceb6bfa7d016ac24bab5ac79cc05936945d9"} Jan 21 13:22:52 crc kubenswrapper[4959]: I0121 13:22:52.555298 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jr2mn" podStartSLOduration=1.610682348 podStartE2EDuration="6.555280046s" podCreationTimestamp="2026-01-21 13:22:46 +0000 UTC" firstStartedPulling="2026-01-21 13:22:47.005273239 +0000 UTC m=+827.968303792" lastFinishedPulling="2026-01-21 13:22:51.949870947 +0000 UTC m=+832.912901490" observedRunningTime="2026-01-21 13:22:52.550668429 +0000 UTC m=+833.513698982" watchObservedRunningTime="2026-01-21 13:22:52.555280046 +0000 UTC m=+833.518310589" Jan 21 13:22:52 crc kubenswrapper[4959]: I0121 13:22:52.617315 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" podStartSLOduration=2.341646551 podStartE2EDuration="6.617293416s" podCreationTimestamp="2026-01-21 13:22:46 +0000 UTC" firstStartedPulling="2026-01-21 13:22:47.692140766 +0000 UTC m=+828.655171309" lastFinishedPulling="2026-01-21 13:22:51.967787631 +0000 UTC m=+832.930818174" observedRunningTime="2026-01-21 13:22:52.61163095 +0000 UTC m=+833.574661493" watchObservedRunningTime="2026-01-21 13:22:52.617293416 +0000 UTC m=+833.580323979" Jan 21 13:22:53 crc kubenswrapper[4959]: I0121 13:22:53.553172 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" 
event={"ID":"9616773b-3c4f-4141-871b-0d35828d0d52","Type":"ContainerStarted","Data":"147d2cfe9c980467820ccdd2aeca10ee4ba7c3c3aa64008b305e6ab9252668d7"} Jan 21 13:22:53 crc kubenswrapper[4959]: I0121 13:22:53.579075 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qwnmg" podStartSLOduration=2.514922665 podStartE2EDuration="7.579047251s" podCreationTimestamp="2026-01-21 13:22:46 +0000 UTC" firstStartedPulling="2026-01-21 13:22:48.010354999 +0000 UTC m=+828.973385542" lastFinishedPulling="2026-01-21 13:22:53.074479585 +0000 UTC m=+834.037510128" observedRunningTime="2026-01-21 13:22:53.571327387 +0000 UTC m=+834.534357930" watchObservedRunningTime="2026-01-21 13:22:53.579047251 +0000 UTC m=+834.542077794" Jan 21 13:22:56 crc kubenswrapper[4959]: I0121 13:22:56.579965 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" event={"ID":"eb1df3cf-b716-4606-8f5b-fb2f1631e5fa","Type":"ContainerStarted","Data":"89a581c352a3e0cda1153aaf35e100abde04c968d1757d23356bdd0d2023fe94"} Jan 21 13:22:56 crc kubenswrapper[4959]: I0121 13:22:56.601373 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-sspgk" podStartSLOduration=2.298035427 podStartE2EDuration="10.601352243s" podCreationTimestamp="2026-01-21 13:22:46 +0000 UTC" firstStartedPulling="2026-01-21 13:22:47.381662207 +0000 UTC m=+828.344692750" lastFinishedPulling="2026-01-21 13:22:55.684979023 +0000 UTC m=+836.648009566" observedRunningTime="2026-01-21 13:22:56.59689404 +0000 UTC m=+837.559924713" watchObservedRunningTime="2026-01-21 13:22:56.601352243 +0000 UTC m=+837.564382786" Jan 21 13:22:57 crc kubenswrapper[4959]: I0121 13:22:57.300745 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:57 crc kubenswrapper[4959]: I0121 13:22:57.300811 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:57 crc kubenswrapper[4959]: I0121 13:22:57.304779 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:57 crc kubenswrapper[4959]: I0121 13:22:57.590048 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-578db8b447-r5vmq" Jan 21 13:22:57 crc kubenswrapper[4959]: I0121 13:22:57.654270 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5qxvm"] Jan 21 13:23:01 crc kubenswrapper[4959]: I0121 13:23:01.997828 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jr2mn" Jan 21 13:23:07 crc kubenswrapper[4959]: I0121 13:23:07.499797 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-g5bl4" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.829954 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878"] Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.831845 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.833563 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.841895 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878"] Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.887284 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.887336 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.887370 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7nl\" (UniqueName: \"kubernetes.io/projected/00065f84-3765-45af-b9ee-9b8b83ebc1b8-kube-api-access-xh7nl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.988403 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.988476 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.988519 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7nl\" (UniqueName: \"kubernetes.io/projected/00065f84-3765-45af-b9ee-9b8b83ebc1b8-kube-api-access-xh7nl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.989504 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:20 crc kubenswrapper[4959]: I0121 13:23:20.989710 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:21 crc kubenswrapper[4959]: I0121 13:23:21.009267 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7nl\" (UniqueName: \"kubernetes.io/projected/00065f84-3765-45af-b9ee-9b8b83ebc1b8-kube-api-access-xh7nl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:21 crc kubenswrapper[4959]: I0121 13:23:21.152777 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:21 crc kubenswrapper[4959]: I0121 13:23:21.342606 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878"] Jan 21 13:23:21 crc kubenswrapper[4959]: I0121 13:23:21.724047 4959 generic.go:334] "Generic (PLEG): container finished" podID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerID="8b91c54f6c2a4e0794011daae05a09c1e3c8971c1f1f809ec3178af7c96c2c9d" exitCode=0 Jan 21 13:23:21 crc kubenswrapper[4959]: I0121 13:23:21.724248 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" event={"ID":"00065f84-3765-45af-b9ee-9b8b83ebc1b8","Type":"ContainerDied","Data":"8b91c54f6c2a4e0794011daae05a09c1e3c8971c1f1f809ec3178af7c96c2c9d"} Jan 21 13:23:21 crc kubenswrapper[4959]: I0121 13:23:21.724425 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" event={"ID":"00065f84-3765-45af-b9ee-9b8b83ebc1b8","Type":"ContainerStarted","Data":"59a355a3e93998ef8aea326dacc98f912c6cf0b4dd7baabbc99a161c5db6550c"} Jan 21 13:23:22 crc kubenswrapper[4959]: I0121 13:23:22.699588 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5qxvm" podUID="277cb73f-7c9e-46e0-bb04-4baea31ec998" containerName="console" containerID="cri-o://aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898" gracePeriod=15 Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.090606 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5qxvm_277cb73f-7c9e-46e0-bb04-4baea31ec998/console/0.log" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.091477 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.216634 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-serving-cert\") pod \"277cb73f-7c9e-46e0-bb04-4baea31ec998\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.216692 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt6x2\" (UniqueName: \"kubernetes.io/projected/277cb73f-7c9e-46e0-bb04-4baea31ec998-kube-api-access-xt6x2\") pod \"277cb73f-7c9e-46e0-bb04-4baea31ec998\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.216721 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-trusted-ca-bundle\") pod \"277cb73f-7c9e-46e0-bb04-4baea31ec998\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.216773 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-config\") pod \"277cb73f-7c9e-46e0-bb04-4baea31ec998\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.216796 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-oauth-config\") pod \"277cb73f-7c9e-46e0-bb04-4baea31ec998\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.216840 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-service-ca\") pod \"277cb73f-7c9e-46e0-bb04-4baea31ec998\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.216910 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-oauth-serving-cert\") pod \"277cb73f-7c9e-46e0-bb04-4baea31ec998\" (UID: \"277cb73f-7c9e-46e0-bb04-4baea31ec998\") " Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.217419 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-service-ca" (OuterVolumeSpecName: "service-ca") pod "277cb73f-7c9e-46e0-bb04-4baea31ec998" (UID: "277cb73f-7c9e-46e0-bb04-4baea31ec998"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.217438 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "277cb73f-7c9e-46e0-bb04-4baea31ec998" (UID: "277cb73f-7c9e-46e0-bb04-4baea31ec998"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.217426 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "277cb73f-7c9e-46e0-bb04-4baea31ec998" (UID: "277cb73f-7c9e-46e0-bb04-4baea31ec998"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.217457 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-config" (OuterVolumeSpecName: "console-config") pod "277cb73f-7c9e-46e0-bb04-4baea31ec998" (UID: "277cb73f-7c9e-46e0-bb04-4baea31ec998"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.217665 4959 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.217690 4959 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.217701 4959 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.217713 4959 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/277cb73f-7c9e-46e0-bb04-4baea31ec998-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.222647 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "277cb73f-7c9e-46e0-bb04-4baea31ec998" (UID: "277cb73f-7c9e-46e0-bb04-4baea31ec998"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.222938 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277cb73f-7c9e-46e0-bb04-4baea31ec998-kube-api-access-xt6x2" (OuterVolumeSpecName: "kube-api-access-xt6x2") pod "277cb73f-7c9e-46e0-bb04-4baea31ec998" (UID: "277cb73f-7c9e-46e0-bb04-4baea31ec998"). InnerVolumeSpecName "kube-api-access-xt6x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.223065 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "277cb73f-7c9e-46e0-bb04-4baea31ec998" (UID: "277cb73f-7c9e-46e0-bb04-4baea31ec998"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.319178 4959 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.319518 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt6x2\" (UniqueName: \"kubernetes.io/projected/277cb73f-7c9e-46e0-bb04-4baea31ec998-kube-api-access-xt6x2\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.319532 4959 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/277cb73f-7c9e-46e0-bb04-4baea31ec998-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.734569 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5qxvm_277cb73f-7c9e-46e0-bb04-4baea31ec998/console/0.log" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.734629 4959 generic.go:334] "Generic (PLEG): container finished" podID="277cb73f-7c9e-46e0-bb04-4baea31ec998" containerID="aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898" exitCode=2 Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.734674 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5qxvm" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.734691 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5qxvm" event={"ID":"277cb73f-7c9e-46e0-bb04-4baea31ec998","Type":"ContainerDied","Data":"aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898"} Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.734718 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5qxvm" event={"ID":"277cb73f-7c9e-46e0-bb04-4baea31ec998","Type":"ContainerDied","Data":"26dd0e135cf72beabb396d8552b72290a4256fa780a88cab04e6aecb90861f8f"} Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.734738 4959 scope.go:117] "RemoveContainer" containerID="aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.737349 4959 generic.go:334] "Generic (PLEG): container finished" podID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerID="d68b9304399687eb9ef36f6abfec962e114a06799bc7a9f93af90dae7ec6a222" exitCode=0 Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.737372 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" event={"ID":"00065f84-3765-45af-b9ee-9b8b83ebc1b8","Type":"ContainerDied","Data":"d68b9304399687eb9ef36f6abfec962e114a06799bc7a9f93af90dae7ec6a222"} Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.753567 4959 scope.go:117] "RemoveContainer" containerID="aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898" Jan 21 13:23:23 crc kubenswrapper[4959]: E0121 13:23:23.754164 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898\": container with ID starting with aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898 not found: ID does not exist" 
containerID="aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.754223 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898"} err="failed to get container status \"aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898\": rpc error: code = NotFound desc = could not find container \"aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898\": container with ID starting with aa1831e6ce3e30a5b739986c3a09c3b167aaf33bcadf933bac11998400c16898 not found: ID does not exist" Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.759426 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5qxvm"] Jan 21 13:23:23 crc kubenswrapper[4959]: I0121 13:23:23.763010 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5qxvm"] Jan 21 13:23:24 crc kubenswrapper[4959]: I0121 13:23:24.757644 4959 generic.go:334] "Generic (PLEG): container finished" podID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerID="f2ced26da3974396a52dadf14245c2b93a5a18f9d64cbbd6c46ba332e757e064" exitCode=0 Jan 21 13:23:24 crc kubenswrapper[4959]: I0121 13:23:24.757698 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" event={"ID":"00065f84-3765-45af-b9ee-9b8b83ebc1b8","Type":"ContainerDied","Data":"f2ced26da3974396a52dadf14245c2b93a5a18f9d64cbbd6c46ba332e757e064"} Jan 21 13:23:25 crc kubenswrapper[4959]: I0121 13:23:25.293765 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277cb73f-7c9e-46e0-bb04-4baea31ec998" path="/var/lib/kubelet/pods/277cb73f-7c9e-46e0-bb04-4baea31ec998/volumes" Jan 21 13:23:25 crc kubenswrapper[4959]: I0121 13:23:25.988918 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.054749 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-util\") pod \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.054824 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-bundle\") pod \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.054906 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7nl\" (UniqueName: \"kubernetes.io/projected/00065f84-3765-45af-b9ee-9b8b83ebc1b8-kube-api-access-xh7nl\") pod \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\" (UID: \"00065f84-3765-45af-b9ee-9b8b83ebc1b8\") " Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.056706 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-bundle" (OuterVolumeSpecName: "bundle") pod "00065f84-3765-45af-b9ee-9b8b83ebc1b8" (UID: "00065f84-3765-45af-b9ee-9b8b83ebc1b8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.064337 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00065f84-3765-45af-b9ee-9b8b83ebc1b8-kube-api-access-xh7nl" (OuterVolumeSpecName: "kube-api-access-xh7nl") pod "00065f84-3765-45af-b9ee-9b8b83ebc1b8" (UID: "00065f84-3765-45af-b9ee-9b8b83ebc1b8"). InnerVolumeSpecName "kube-api-access-xh7nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.068909 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-util" (OuterVolumeSpecName: "util") pod "00065f84-3765-45af-b9ee-9b8b83ebc1b8" (UID: "00065f84-3765-45af-b9ee-9b8b83ebc1b8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.156755 4959 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-util\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.156798 4959 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00065f84-3765-45af-b9ee-9b8b83ebc1b8-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.156814 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7nl\" (UniqueName: \"kubernetes.io/projected/00065f84-3765-45af-b9ee-9b8b83ebc1b8-kube-api-access-xh7nl\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.773935 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" event={"ID":"00065f84-3765-45af-b9ee-9b8b83ebc1b8","Type":"ContainerDied","Data":"59a355a3e93998ef8aea326dacc98f912c6cf0b4dd7baabbc99a161c5db6550c"} Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.774514 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59a355a3e93998ef8aea326dacc98f912c6cf0b4dd7baabbc99a161c5db6550c" Jan 21 13:23:26 crc kubenswrapper[4959]: I0121 13:23:26.774069 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.356896 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl"] Jan 21 13:23:35 crc kubenswrapper[4959]: E0121 13:23:35.357692 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerName="pull" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.357709 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerName="pull" Jan 21 13:23:35 crc kubenswrapper[4959]: E0121 13:23:35.357736 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerName="extract" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.357744 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerName="extract" Jan 21 13:23:35 crc kubenswrapper[4959]: E0121 13:23:35.357756 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277cb73f-7c9e-46e0-bb04-4baea31ec998" containerName="console" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.357765 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="277cb73f-7c9e-46e0-bb04-4baea31ec998" containerName="console" Jan 21 13:23:35 crc kubenswrapper[4959]: E0121 13:23:35.357774 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerName="util" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.357781 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerName="util" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.357906 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="00065f84-3765-45af-b9ee-9b8b83ebc1b8" containerName="extract" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.357928 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="277cb73f-7c9e-46e0-bb04-4baea31ec998" containerName="console" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.358465 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.361532 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.361577 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.363157 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.364846 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.376430 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4bwf9" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.405185 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl"] Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.475021 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dd40e28-d4df-4cae-b104-773876261939-apiservice-cert\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.475120 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvp5x\" (UniqueName: \"kubernetes.io/projected/6dd40e28-d4df-4cae-b104-773876261939-kube-api-access-jvp5x\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.475177 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dd40e28-d4df-4cae-b104-773876261939-webhook-cert\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.576594 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dd40e28-d4df-4cae-b104-773876261939-apiservice-cert\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.576674 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvp5x\" (UniqueName: \"kubernetes.io/projected/6dd40e28-d4df-4cae-b104-773876261939-kube-api-access-jvp5x\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.576726 
4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dd40e28-d4df-4cae-b104-773876261939-webhook-cert\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.587553 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6dd40e28-d4df-4cae-b104-773876261939-apiservice-cert\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.587553 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6dd40e28-d4df-4cae-b104-773876261939-webhook-cert\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.604809 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvp5x\" (UniqueName: \"kubernetes.io/projected/6dd40e28-d4df-4cae-b104-773876261939-kube-api-access-jvp5x\") pod \"metallb-operator-controller-manager-55559dddc4-94kkl\" (UID: \"6dd40e28-d4df-4cae-b104-773876261939\") " pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.673272 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.675536 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x"] Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.676275 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.677820 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.678189 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hx5r9" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.683277 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.692013 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x"] Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.782000 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b055f38-5f56-4bb9-bfd6-25fb04003144-apiservice-cert\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.782058 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b055f38-5f56-4bb9-bfd6-25fb04003144-webhook-cert\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.782086 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56cc\" (UniqueName: \"kubernetes.io/projected/6b055f38-5f56-4bb9-bfd6-25fb04003144-kube-api-access-n56cc\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.883281 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b055f38-5f56-4bb9-bfd6-25fb04003144-apiservice-cert\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.883621 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b055f38-5f56-4bb9-bfd6-25fb04003144-webhook-cert\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.883650 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56cc\" (UniqueName: \"kubernetes.io/projected/6b055f38-5f56-4bb9-bfd6-25fb04003144-kube-api-access-n56cc\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 
13:23:35.889042 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b055f38-5f56-4bb9-bfd6-25fb04003144-apiservice-cert\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.890601 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b055f38-5f56-4bb9-bfd6-25fb04003144-webhook-cert\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:35 crc kubenswrapper[4959]: I0121 13:23:35.905993 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56cc\" (UniqueName: \"kubernetes.io/projected/6b055f38-5f56-4bb9-bfd6-25fb04003144-kube-api-access-n56cc\") pod \"metallb-operator-webhook-server-8489dff5dc-5dz6x\" (UID: \"6b055f38-5f56-4bb9-bfd6-25fb04003144\") " pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:36 crc kubenswrapper[4959]: I0121 13:23:36.034757 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:36 crc kubenswrapper[4959]: I0121 13:23:36.145500 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl"] Jan 21 13:23:36 crc kubenswrapper[4959]: W0121 13:23:36.157225 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dd40e28_d4df_4cae_b104_773876261939.slice/crio-74158cbd884bc348c1c40b9e5003a374435e324c5136d717f28c942eacdb4307 WatchSource:0}: Error finding container 74158cbd884bc348c1c40b9e5003a374435e324c5136d717f28c942eacdb4307: Status 404 returned error can't find the container with id 74158cbd884bc348c1c40b9e5003a374435e324c5136d717f28c942eacdb4307 Jan 21 13:23:36 crc kubenswrapper[4959]: I0121 13:23:36.282401 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x"] Jan 21 13:23:36 crc kubenswrapper[4959]: W0121 13:23:36.290221 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b055f38_5f56_4bb9_bfd6_25fb04003144.slice/crio-d92404bbea8768f6f21fcc81e807454800be2e152c789cdfcea1c246a9fe2bd3 WatchSource:0}: Error finding container d92404bbea8768f6f21fcc81e807454800be2e152c789cdfcea1c246a9fe2bd3: Status 404 returned error can't find the container with id d92404bbea8768f6f21fcc81e807454800be2e152c789cdfcea1c246a9fe2bd3
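
The two cAdvisor warnings above record a benign race: the new pods' cgroups (crio-74158cbd... and crio-d92404bb...) show up in the watch stream before the containers are registered with the runtime, so the lookup returns 404 and the event is dropped; the PLEG ContainerStarted events that follow pick the containers up once they are known. A hedged sketch of that tolerate-not-found pattern follows; errNotFound, inspect, and watchEvent are invented for illustration and are not cAdvisor's actual API:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the "Status 404 ... can't find the container"
// condition: the cgroup appears before the runtime can describe it.
var errNotFound = errors.New("container not found")

type watchEvent struct{ containerID string }

// inspect is a hypothetical lookup that can race with container creation.
func inspect(id string) error { return errNotFound }

// processWatchEvent logs and skips a not-found container instead of failing
// hard; a later event will observe it once registration completes.
func processWatchEvent(ev watchEvent) {
	if err := inspect(ev.containerID); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("Failed to process watch event for %s: %v\n", ev.containerID, err)
			return
		}
		panic(err)
	}
}

func main() {
	processWatchEvent(watchEvent{containerID: "74158cbd884bc348c1c40b9e5003a374435e324c5136d717f28c942eacdb4307"})
}
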
event={"ID":"6b055f38-5f56-4bb9-bfd6-25fb04003144","Type":"ContainerStarted","Data":"d92404bbea8768f6f21fcc81e807454800be2e152c789cdfcea1c246a9fe2bd3"} Jan 21 13:23:43 crc kubenswrapper[4959]: I0121 13:23:43.908504 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" event={"ID":"6dd40e28-d4df-4cae-b104-773876261939","Type":"ContainerStarted","Data":"157b48f0723746505e5764cac4a83186307b0651ca479ebe903a87df944af91b"} Jan 21 13:23:43 crc kubenswrapper[4959]: I0121 13:23:43.909914 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:23:43 crc kubenswrapper[4959]: I0121 13:23:43.910250 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" event={"ID":"6b055f38-5f56-4bb9-bfd6-25fb04003144","Type":"ContainerStarted","Data":"a3c12ea936e5e179da1b3990654618470ff7fd04255052c212dcf7775b511a7a"} Jan 21 13:23:43 crc kubenswrapper[4959]: I0121 13:23:43.910459 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:23:43 crc kubenswrapper[4959]: I0121 13:23:43.929378 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" podStartSLOduration=1.49861191 podStartE2EDuration="8.929365206s" podCreationTimestamp="2026-01-21 13:23:35 +0000 UTC" firstStartedPulling="2026-01-21 13:23:36.172753628 +0000 UTC m=+877.135784171" lastFinishedPulling="2026-01-21 13:23:43.603506924 +0000 UTC m=+884.566537467" observedRunningTime="2026-01-21 13:23:43.925816524 +0000 UTC m=+884.888847067" watchObservedRunningTime="2026-01-21 13:23:43.929365206 +0000 UTC m=+884.892395749" Jan 21 13:23:43 crc kubenswrapper[4959]: I0121 13:23:43.953056 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" podStartSLOduration=1.62783548 podStartE2EDuration="8.953034407s" podCreationTimestamp="2026-01-21 13:23:35 +0000 UTC" firstStartedPulling="2026-01-21 13:23:36.294187064 +0000 UTC m=+877.257217607" lastFinishedPulling="2026-01-21 13:23:43.619385991 +0000 UTC m=+884.582416534" observedRunningTime="2026-01-21 13:23:43.947858158 +0000 UTC m=+884.910888731" watchObservedRunningTime="2026-01-21 13:23:43.953034407 +0000 UTC m=+884.916064950" Jan 21 13:23:51 crc kubenswrapper[4959]: I0121 13:23:51.380167 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:23:51 crc kubenswrapper[4959]: I0121 13:23:51.380704 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:23:56 crc kubenswrapper[4959]: I0121 13:23:56.039349 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 
Jan 21 13:23:51 crc kubenswrapper[4959]: I0121 13:23:51.380167 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:23:51 crc kubenswrapper[4959]: I0121 13:23:51.380704 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:23:56 crc kubenswrapper[4959]: I0121 13:23:56.039349 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8489dff5dc-5dz6x" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.262001 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tlppj"] Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.263589 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.275978 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlppj"] Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.438907 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-catalog-content\") pod \"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.439246 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5pzb\" (UniqueName: \"kubernetes.io/projected/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-kube-api-access-h5pzb\") pod \"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.439366 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-utilities\") pod \"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.540834 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-catalog-content\") pod \"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.540891 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5pzb\" (UniqueName: \"kubernetes.io/projected/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-kube-api-access-h5pzb\") pod \"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.540932 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-utilities\") pod \"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.541471 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-catalog-content\") pod \"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.541513 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-utilities\") pod
\"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.572128 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5pzb\" (UniqueName: \"kubernetes.io/projected/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-kube-api-access-h5pzb\") pod \"redhat-marketplace-tlppj\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:00 crc kubenswrapper[4959]: I0121 13:24:00.583502 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:01 crc kubenswrapper[4959]: I0121 13:24:01.237270 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlppj"] Jan 21 13:24:02 crc kubenswrapper[4959]: I0121 13:24:02.016224 4959 generic.go:334] "Generic (PLEG): container finished" podID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerID="973bf95899adc26e4f92eccf9aed3119ebf3896968a21a4d3a73e1cabb8dbf44" exitCode=0 Jan 21 13:24:02 crc kubenswrapper[4959]: I0121 13:24:02.016297 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlppj" event={"ID":"0cdf6659-7d62-4d13-a7cc-4c7d298d8729","Type":"ContainerDied","Data":"973bf95899adc26e4f92eccf9aed3119ebf3896968a21a4d3a73e1cabb8dbf44"} Jan 21 13:24:02 crc kubenswrapper[4959]: I0121 13:24:02.016495 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlppj" event={"ID":"0cdf6659-7d62-4d13-a7cc-4c7d298d8729","Type":"ContainerStarted","Data":"c2ec75c86bd0e348ed369037b9dfa8f33c6baa685a91c57a589f0530ffd0751a"} Jan 21 13:24:03 crc kubenswrapper[4959]: I0121 13:24:03.024772 4959 generic.go:334] "Generic (PLEG): container finished" podID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerID="389861d884ab39dd874539f1506779201fed172bce452e18ad5f10c4d8b2b0ef" exitCode=0 Jan 21 13:24:03 crc kubenswrapper[4959]: I0121 13:24:03.024873 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlppj" event={"ID":"0cdf6659-7d62-4d13-a7cc-4c7d298d8729","Type":"ContainerDied","Data":"389861d884ab39dd874539f1506779201fed172bce452e18ad5f10c4d8b2b0ef"} Jan 21 13:24:04 crc kubenswrapper[4959]: I0121 13:24:04.032284 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlppj" event={"ID":"0cdf6659-7d62-4d13-a7cc-4c7d298d8729","Type":"ContainerStarted","Data":"97c3ceb343d7bc4d5f35001a89ea4362f74045d8e29896d04caf6337f8db74a5"} Jan 21 13:24:04 crc kubenswrapper[4959]: I0121 13:24:04.053079 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tlppj" podStartSLOduration=2.640998525 podStartE2EDuration="4.053056907s" podCreationTimestamp="2026-01-21 13:24:00 +0000 UTC" firstStartedPulling="2026-01-21 13:24:02.018509151 +0000 UTC m=+902.981539694" lastFinishedPulling="2026-01-21 13:24:03.430567533 +0000 UTC m=+904.393598076" observedRunningTime="2026-01-21 13:24:04.049743907 +0000 UTC m=+905.012774450" watchObservedRunningTime="2026-01-21 13:24:04.053056907 +0000 UTC m=+905.016087460" Jan 21 13:24:10 crc kubenswrapper[4959]: I0121 13:24:10.583993 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:10 crc 
kubenswrapper[4959]: I0121 13:24:10.584301 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:10 crc kubenswrapper[4959]: I0121 13:24:10.636817 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:11 crc kubenswrapper[4959]: I0121 13:24:11.107741 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:11 crc kubenswrapper[4959]: I0121 13:24:11.149501 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlppj"] Jan 21 13:24:13 crc kubenswrapper[4959]: I0121 13:24:13.084501 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tlppj" podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerName="registry-server" containerID="cri-o://97c3ceb343d7bc4d5f35001a89ea4362f74045d8e29896d04caf6337f8db74a5" gracePeriod=2 Jan 21 13:24:15 crc kubenswrapper[4959]: I0121 13:24:15.676436 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-55559dddc4-94kkl" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.103062 4959 generic.go:334] "Generic (PLEG): container finished" podID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerID="97c3ceb343d7bc4d5f35001a89ea4362f74045d8e29896d04caf6337f8db74a5" exitCode=0 Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.103149 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlppj" event={"ID":"0cdf6659-7d62-4d13-a7cc-4c7d298d8729","Type":"ContainerDied","Data":"97c3ceb343d7bc4d5f35001a89ea4362f74045d8e29896d04caf6337f8db74a5"} Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.405473 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb"] Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.406403 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.408539 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.408622 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-sp4mw" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.413302 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rwtbb"] Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.420315 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.421809 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.421958 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.432866 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb"] Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441018 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94rm6\" (UniqueName: \"kubernetes.io/projected/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-kube-api-access-94rm6\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441087 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-metrics-certs\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441195 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfnk\" (UniqueName: \"kubernetes.io/projected/8c501f4c-58de-43a4-80c2-5268f10bca20-kube-api-access-hrfnk\") pod \"frr-k8s-webhook-server-7df86c4f6c-wn2jb\" (UID: \"8c501f4c-58de-43a4-80c2-5268f10bca20\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441228 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-sockets\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441249 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c501f4c-58de-43a4-80c2-5268f10bca20-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wn2jb\" (UID: \"8c501f4c-58de-43a4-80c2-5268f10bca20\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441276 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-metrics\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441313 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-conf\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441346 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-reloader\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.441367 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-startup\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.495960 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-hdghc"] Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.496958 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.499123 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-t2mlp" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.499136 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.499379 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.500440 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.520999 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-sfwv8"] Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.522113 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: W0121 13:24:16.526663 4959 reflector.go:561] object-"metallb-system"/"controller-certs-secret": failed to list *v1.Secret: secrets "controller-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 21 13:24:16 crc kubenswrapper[4959]: E0121 13:24:16.526723 4959 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"controller-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.542849 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-cert\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.542898 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.542916 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8d3c9089-9424-4aca-87fb-20992ea6ed12-metallb-excludel2\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.542944 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-metrics-certs\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.542973 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrfnk\" (UniqueName: \"kubernetes.io/projected/8c501f4c-58de-43a4-80c2-5268f10bca20-kube-api-access-hrfnk\") pod \"frr-k8s-webhook-server-7df86c4f6c-wn2jb\" (UID: \"8c501f4c-58de-43a4-80c2-5268f10bca20\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.543204 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-sockets\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.543246 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c501f4c-58de-43a4-80c2-5268f10bca20-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wn2jb\" (UID: 
\"8c501f4c-58de-43a4-80c2-5268f10bca20\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.543290 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-metrics\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.543323 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-conf\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.543401 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-reloader\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.543964 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-reloader\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544020 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-sockets\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544067 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-conf\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544037 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-startup\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544168 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-metrics\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544275 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-metrics-certs\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544323 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slqwt\" (UniqueName: 
\"kubernetes.io/projected/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-kube-api-access-slqwt\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544364 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94rm6\" (UniqueName: \"kubernetes.io/projected/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-kube-api-access-94rm6\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544468 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvs9w\" (UniqueName: \"kubernetes.io/projected/8d3c9089-9424-4aca-87fb-20992ea6ed12-kube-api-access-qvs9w\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.544546 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-metrics-certs\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.545441 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-frr-startup\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.551829 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-metrics-certs\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.565531 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-sfwv8"] Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.568787 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c501f4c-58de-43a4-80c2-5268f10bca20-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wn2jb\" (UID: \"8c501f4c-58de-43a4-80c2-5268f10bca20\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.594297 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrfnk\" (UniqueName: \"kubernetes.io/projected/8c501f4c-58de-43a4-80c2-5268f10bca20-kube-api-access-hrfnk\") pod \"frr-k8s-webhook-server-7df86c4f6c-wn2jb\" (UID: \"8c501f4c-58de-43a4-80c2-5268f10bca20\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.605556 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94rm6\" (UniqueName: \"kubernetes.io/projected/197db6ef-4bd0-4bf4-b9d8-c44565c03be6-kube-api-access-94rm6\") pod \"frr-k8s-rwtbb\" (UID: \"197db6ef-4bd0-4bf4-b9d8-c44565c03be6\") " pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.651811 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-metrics-certs\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.651893 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slqwt\" (UniqueName: \"kubernetes.io/projected/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-kube-api-access-slqwt\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.651920 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvs9w\" (UniqueName: \"kubernetes.io/projected/8d3c9089-9424-4aca-87fb-20992ea6ed12-kube-api-access-qvs9w\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.651956 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-cert\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.651979 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.652002 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8d3c9089-9424-4aca-87fb-20992ea6ed12-metallb-excludel2\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.652039 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-metrics-certs\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: E0121 13:24:16.652268 4959 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 21 13:24:16 crc kubenswrapper[4959]: E0121 13:24:16.652332 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-metrics-certs podName:8d3c9089-9424-4aca-87fb-20992ea6ed12 nodeName:}" failed. No retries permitted until 2026-01-21 13:24:17.152310879 +0000 UTC m=+918.115341432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-metrics-certs") pod "speaker-hdghc" (UID: "8d3c9089-9424-4aca-87fb-20992ea6ed12") : secret "speaker-certs-secret" not found Jan 21 13:24:16 crc kubenswrapper[4959]: E0121 13:24:16.653018 4959 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 13:24:16 crc kubenswrapper[4959]: E0121 13:24:16.653056 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist podName:8d3c9089-9424-4aca-87fb-20992ea6ed12 nodeName:}" failed. No retries permitted until 2026-01-21 13:24:17.153046349 +0000 UTC m=+918.116076892 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist") pod "speaker-hdghc" (UID: "8d3c9089-9424-4aca-87fb-20992ea6ed12") : secret "metallb-memberlist" not found Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.653862 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8d3c9089-9424-4aca-87fb-20992ea6ed12-metallb-excludel2\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.660459 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.667697 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-cert\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.679616 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slqwt\" (UniqueName: \"kubernetes.io/projected/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-kube-api-access-slqwt\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.687363 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvs9w\" (UniqueName: \"kubernetes.io/projected/8d3c9089-9424-4aca-87fb-20992ea6ed12-kube-api-access-qvs9w\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.732302 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.743564 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:16 crc kubenswrapper[4959]: I0121 13:24:16.955385 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb"] Jan 21 13:24:16 crc kubenswrapper[4959]: W0121 13:24:16.972401 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c501f4c_58de_43a4_80c2_5268f10bca20.slice/crio-678b6de1721dcb03800057f42b9a5be63aa097462febe5db55e85dccb2ed936c WatchSource:0}: Error finding container 678b6de1721dcb03800057f42b9a5be63aa097462febe5db55e85dccb2ed936c: Status 404 returned error can't find the container with id 678b6de1721dcb03800057f42b9a5be63aa097462febe5db55e85dccb2ed936c Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.109407 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerStarted","Data":"9ea336a1ca8033156a30f64b02636dfe812dea57db491fe6b4c678fb28c171cc"} Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.110387 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" event={"ID":"8c501f4c-58de-43a4-80c2-5268f10bca20","Type":"ContainerStarted","Data":"678b6de1721dcb03800057f42b9a5be63aa097462febe5db55e85dccb2ed936c"} Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.167236 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-metrics-certs\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.167634 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:17 crc kubenswrapper[4959]: E0121 13:24:17.167778 4959 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 13:24:17 crc kubenswrapper[4959]: E0121 13:24:17.167848 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist podName:8d3c9089-9424-4aca-87fb-20992ea6ed12 nodeName:}" failed. No retries permitted until 2026-01-21 13:24:18.167830314 +0000 UTC m=+919.130860867 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist") pod "speaker-hdghc" (UID: "8d3c9089-9424-4aca-87fb-20992ea6ed12") : secret "metallb-memberlist" not found Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.173990 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-metrics-certs\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.288739 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.369952 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5pzb\" (UniqueName: \"kubernetes.io/projected/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-kube-api-access-h5pzb\") pod \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.370123 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-catalog-content\") pod \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.370238 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-utilities\") pod \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\" (UID: \"0cdf6659-7d62-4d13-a7cc-4c7d298d8729\") " Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.371271 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-utilities" (OuterVolumeSpecName: "utilities") pod "0cdf6659-7d62-4d13-a7cc-4c7d298d8729" (UID: "0cdf6659-7d62-4d13-a7cc-4c7d298d8729"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.373632 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-kube-api-access-h5pzb" (OuterVolumeSpecName: "kube-api-access-h5pzb") pod "0cdf6659-7d62-4d13-a7cc-4c7d298d8729" (UID: "0cdf6659-7d62-4d13-a7cc-4c7d298d8729"). InnerVolumeSpecName "kube-api-access-h5pzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.390802 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cdf6659-7d62-4d13-a7cc-4c7d298d8729" (UID: "0cdf6659-7d62-4d13-a7cc-4c7d298d8729"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.472216 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.472258 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5pzb\" (UniqueName: \"kubernetes.io/projected/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-kube-api-access-h5pzb\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.472272 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdf6659-7d62-4d13-a7cc-4c7d298d8729-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:17 crc kubenswrapper[4959]: E0121 13:24:17.653228 4959 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: failed to sync secret cache: timed out waiting for the condition Jan 21 13:24:17 crc kubenswrapper[4959]: E0121 13:24:17.653337 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-metrics-certs podName:40b8577e-ef7c-4aaa-abb5-fca4b4ea2173 nodeName:}" failed. No retries permitted until 2026-01-21 13:24:18.153317556 +0000 UTC m=+919.116348099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-metrics-certs") pod "controller-6968d8fdc4-sfwv8" (UID: "40b8577e-ef7c-4aaa-abb5-fca4b4ea2173") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:24:17 crc kubenswrapper[4959]: I0121 13:24:17.872246 4959 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.133387 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlppj" event={"ID":"0cdf6659-7d62-4d13-a7cc-4c7d298d8729","Type":"ContainerDied","Data":"c2ec75c86bd0e348ed369037b9dfa8f33c6baa685a91c57a589f0530ffd0751a"} Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.133721 4959 scope.go:117] "RemoveContainer" containerID="97c3ceb343d7bc4d5f35001a89ea4362f74045d8e29896d04caf6337f8db74a5" Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.133488 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlppj" Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.151716 4959 scope.go:117] "RemoveContainer" containerID="389861d884ab39dd874539f1506779201fed172bce452e18ad5f10c4d8b2b0ef" Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.169598 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlppj"] Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.184805 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlppj"] Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.185545 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.185638 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-metrics-certs\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:18 crc kubenswrapper[4959]: E0121 13:24:18.187540 4959 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 13:24:18 crc kubenswrapper[4959]: E0121 13:24:18.187624 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist podName:8d3c9089-9424-4aca-87fb-20992ea6ed12 nodeName:}" failed. No retries permitted until 2026-01-21 13:24:20.18760093 +0000 UTC m=+921.150631513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist") pod "speaker-hdghc" (UID: "8d3c9089-9424-4aca-87fb-20992ea6ed12") : secret "metallb-memberlist" not found Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.192626 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40b8577e-ef7c-4aaa-abb5-fca4b4ea2173-metrics-certs\") pod \"controller-6968d8fdc4-sfwv8\" (UID: \"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173\") " pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.192971 4959 scope.go:117] "RemoveContainer" containerID="973bf95899adc26e4f92eccf9aed3119ebf3896968a21a4d3a73e1cabb8dbf44" Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.338316 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:18 crc kubenswrapper[4959]: I0121 13:24:18.777398 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-sfwv8"] Jan 21 13:24:19 crc kubenswrapper[4959]: I0121 13:24:19.146169 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sfwv8" event={"ID":"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173","Type":"ContainerStarted","Data":"d63d754f8744627ab5ff937ccab606dddca9922e24fce93eec71c83ffb331d31"} Jan 21 13:24:19 crc kubenswrapper[4959]: I0121 13:24:19.146213 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sfwv8" event={"ID":"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173","Type":"ContainerStarted","Data":"05f1cd77853c046ff657b2782c72aa5606a3f71f11e23197dd19d9c35569001a"} Jan 21 13:24:19 crc kubenswrapper[4959]: I0121 13:24:19.294227 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" path="/var/lib/kubelet/pods/0cdf6659-7d62-4d13-a7cc-4c7d298d8729/volumes" Jan 21 13:24:20 crc kubenswrapper[4959]: I0121 13:24:20.157485 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sfwv8" event={"ID":"40b8577e-ef7c-4aaa-abb5-fca4b4ea2173","Type":"ContainerStarted","Data":"a2a7e9223712c8d95a04cc2498baffa48d4ed5cbccaac746d8cf46a36065a88b"} Jan 21 13:24:20 crc kubenswrapper[4959]: I0121 13:24:20.157646 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:20 crc kubenswrapper[4959]: I0121 13:24:20.211864 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:20 crc kubenswrapper[4959]: I0121 13:24:20.229020 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8d3c9089-9424-4aca-87fb-20992ea6ed12-memberlist\") pod \"speaker-hdghc\" (UID: \"8d3c9089-9424-4aca-87fb-20992ea6ed12\") " pod="metallb-system/speaker-hdghc" Jan 21 13:24:20 crc kubenswrapper[4959]: I0121 13:24:20.412410 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-hdghc" Jan 21 13:24:20 crc kubenswrapper[4959]: W0121 13:24:20.469808 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d3c9089_9424_4aca_87fb_20992ea6ed12.slice/crio-896012ff7e861bddc66d47983fca2302f60f6d4775e9ed8a8f63e2a7c6d8a0dd WatchSource:0}: Error finding container 896012ff7e861bddc66d47983fca2302f60f6d4775e9ed8a8f63e2a7c6d8a0dd: Status 404 returned error can't find the container with id 896012ff7e861bddc66d47983fca2302f60f6d4775e9ed8a8f63e2a7c6d8a0dd Jan 21 13:24:21 crc kubenswrapper[4959]: I0121 13:24:21.181458 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hdghc" event={"ID":"8d3c9089-9424-4aca-87fb-20992ea6ed12","Type":"ContainerStarted","Data":"2259ee21c5c07ecf39a0e293b9f7cbb5298ee7c73006b22a952278a4e95b4127"} Jan 21 13:24:21 crc kubenswrapper[4959]: I0121 13:24:21.181532 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hdghc" event={"ID":"8d3c9089-9424-4aca-87fb-20992ea6ed12","Type":"ContainerStarted","Data":"896012ff7e861bddc66d47983fca2302f60f6d4775e9ed8a8f63e2a7c6d8a0dd"} Jan 21 13:24:21 crc kubenswrapper[4959]: I0121 13:24:21.380031 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:24:21 crc kubenswrapper[4959]: I0121 13:24:21.380125 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:24:22 crc kubenswrapper[4959]: I0121 13:24:22.199056 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hdghc" event={"ID":"8d3c9089-9424-4aca-87fb-20992ea6ed12","Type":"ContainerStarted","Data":"04f8f58139cb38583027c881d70f64f43cd05a1af281ad7bd6644a479163ce19"} Jan 21 13:24:22 crc kubenswrapper[4959]: I0121 13:24:22.199678 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-hdghc" Jan 21 13:24:22 crc kubenswrapper[4959]: I0121 13:24:22.227942 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-hdghc" podStartSLOduration=6.227915929 podStartE2EDuration="6.227915929s" podCreationTimestamp="2026-01-21 13:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:22.225020681 +0000 UTC m=+923.188051244" watchObservedRunningTime="2026-01-21 13:24:22.227915929 +0000 UTC m=+923.190946472" Jan 21 13:24:22 crc kubenswrapper[4959]: I0121 13:24:22.232294 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-sfwv8" podStartSLOduration=6.232279618 podStartE2EDuration="6.232279618s" podCreationTimestamp="2026-01-21 13:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:20.187201918 +0000 UTC m=+921.150232481" watchObservedRunningTime="2026-01-21 13:24:22.232279618 +0000 UTC 
m=+923.195310171" Jan 21 13:24:26 crc kubenswrapper[4959]: I0121 13:24:26.220777 4959 generic.go:334] "Generic (PLEG): container finished" podID="197db6ef-4bd0-4bf4-b9d8-c44565c03be6" containerID="f744bd62a9e6e1ae5df74c2e40ff586d3ed3bd53c4b8253e24769f00bda68d8c" exitCode=0 Jan 21 13:24:26 crc kubenswrapper[4959]: I0121 13:24:26.220834 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerDied","Data":"f744bd62a9e6e1ae5df74c2e40ff586d3ed3bd53c4b8253e24769f00bda68d8c"} Jan 21 13:24:26 crc kubenswrapper[4959]: I0121 13:24:26.224000 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" event={"ID":"8c501f4c-58de-43a4-80c2-5268f10bca20","Type":"ContainerStarted","Data":"6e2a8d3e6add10af41c79e1b8322ef189917c41c2a5e3967c8f69844bafbea93"} Jan 21 13:24:26 crc kubenswrapper[4959]: I0121 13:24:26.224221 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:26 crc kubenswrapper[4959]: I0121 13:24:26.263366 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" podStartSLOduration=1.999140393 podStartE2EDuration="10.263341848s" podCreationTimestamp="2026-01-21 13:24:16 +0000 UTC" firstStartedPulling="2026-01-21 13:24:16.975951436 +0000 UTC m=+917.938981979" lastFinishedPulling="2026-01-21 13:24:25.240152891 +0000 UTC m=+926.203183434" observedRunningTime="2026-01-21 13:24:26.2626925 +0000 UTC m=+927.225723063" watchObservedRunningTime="2026-01-21 13:24:26.263341848 +0000 UTC m=+927.226372411" Jan 21 13:24:27 crc kubenswrapper[4959]: I0121 13:24:27.230404 4959 generic.go:334] "Generic (PLEG): container finished" podID="197db6ef-4bd0-4bf4-b9d8-c44565c03be6" containerID="79addad4aaf59da902835fa546a10e76be4b7f7a46e408490d64cc1f1b8a82bc" exitCode=0 Jan 21 13:24:27 crc kubenswrapper[4959]: I0121 13:24:27.230470 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerDied","Data":"79addad4aaf59da902835fa546a10e76be4b7f7a46e408490d64cc1f1b8a82bc"} Jan 21 13:24:28 crc kubenswrapper[4959]: I0121 13:24:28.237866 4959 generic.go:334] "Generic (PLEG): container finished" podID="197db6ef-4bd0-4bf4-b9d8-c44565c03be6" containerID="571acd69f46fe6a47779d163b660d68d274f7cd5fed08abb48c67c2d69942440" exitCode=0 Jan 21 13:24:28 crc kubenswrapper[4959]: I0121 13:24:28.237909 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerDied","Data":"571acd69f46fe6a47779d163b660d68d274f7cd5fed08abb48c67c2d69942440"} Jan 21 13:24:29 crc kubenswrapper[4959]: I0121 13:24:29.248483 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerStarted","Data":"f3ab0d3083f855db809021c4ad4953bcb32e34a555c2fb38b02adac46c79d7e0"} Jan 21 13:24:29 crc kubenswrapper[4959]: I0121 13:24:29.248829 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerStarted","Data":"46b34a23d7d0c738e4b19582dd3cfb93cd91a21835508be3fc59e962911a03ef"} Jan 21 13:24:29 crc kubenswrapper[4959]: I0121 13:24:29.248843 4959 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerStarted","Data":"d2305fb9feee015db32b968460e2dd2ebfbaf30f6fc44c9873d4a36493d59f57"} Jan 21 13:24:29 crc kubenswrapper[4959]: I0121 13:24:29.248855 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerStarted","Data":"f2d57ec18a117b500b793d9858a0a86ecb16f0621fb22cd530bb3fed137229da"} Jan 21 13:24:29 crc kubenswrapper[4959]: I0121 13:24:29.248865 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerStarted","Data":"903ef70e57fff7e5e82184921a28f6bac2437e60dbf90dfd12258aaecb5b717c"} Jan 21 13:24:30 crc kubenswrapper[4959]: I0121 13:24:30.257550 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rwtbb" event={"ID":"197db6ef-4bd0-4bf4-b9d8-c44565c03be6","Type":"ContainerStarted","Data":"ffd783aa9027d2e8b0064f6baa28da87e894f5473682cbe9e9124b0c8ded3a82"} Jan 21 13:24:30 crc kubenswrapper[4959]: I0121 13:24:30.258261 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:30 crc kubenswrapper[4959]: I0121 13:24:30.282914 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rwtbb" podStartSLOduration=5.889123172 podStartE2EDuration="14.282896367s" podCreationTimestamp="2026-01-21 13:24:16 +0000 UTC" firstStartedPulling="2026-01-21 13:24:16.879636877 +0000 UTC m=+917.842667420" lastFinishedPulling="2026-01-21 13:24:25.273410072 +0000 UTC m=+926.236440615" observedRunningTime="2026-01-21 13:24:30.279204627 +0000 UTC m=+931.242235190" watchObservedRunningTime="2026-01-21 13:24:30.282896367 +0000 UTC m=+931.245926910" Jan 21 13:24:30 crc kubenswrapper[4959]: I0121 13:24:30.417160 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-hdghc" Jan 21 13:24:31 crc kubenswrapper[4959]: I0121 13:24:31.744070 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:31 crc kubenswrapper[4959]: I0121 13:24:31.783804 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.258158 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bfvkv"] Jan 21 13:24:33 crc kubenswrapper[4959]: E0121 13:24:33.258668 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerName="registry-server" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.258683 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerName="registry-server" Jan 21 13:24:33 crc kubenswrapper[4959]: E0121 13:24:33.258702 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerName="extract-content" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.258707 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerName="extract-content" Jan 21 13:24:33 crc kubenswrapper[4959]: E0121 13:24:33.258719 4959 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerName="extract-utilities" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.258725 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerName="extract-utilities" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.258833 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cdf6659-7d62-4d13-a7cc-4c7d298d8729" containerName="registry-server" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.259220 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bfvkv" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.263883 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.263946 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-n86dz" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.265237 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.268690 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bfvkv"] Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.403916 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qmln\" (UniqueName: \"kubernetes.io/projected/ad20eb21-3645-4e36-8230-d2104175658a-kube-api-access-9qmln\") pod \"openstack-operator-index-bfvkv\" (UID: \"ad20eb21-3645-4e36-8230-d2104175658a\") " pod="openstack-operators/openstack-operator-index-bfvkv" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.505134 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qmln\" (UniqueName: \"kubernetes.io/projected/ad20eb21-3645-4e36-8230-d2104175658a-kube-api-access-9qmln\") pod \"openstack-operator-index-bfvkv\" (UID: \"ad20eb21-3645-4e36-8230-d2104175658a\") " pod="openstack-operators/openstack-operator-index-bfvkv" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.522800 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qmln\" (UniqueName: \"kubernetes.io/projected/ad20eb21-3645-4e36-8230-d2104175658a-kube-api-access-9qmln\") pod \"openstack-operator-index-bfvkv\" (UID: \"ad20eb21-3645-4e36-8230-d2104175658a\") " pod="openstack-operators/openstack-operator-index-bfvkv" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.587078 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bfvkv" Jan 21 13:24:33 crc kubenswrapper[4959]: I0121 13:24:33.853193 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bfvkv"] Jan 21 13:24:34 crc kubenswrapper[4959]: I0121 13:24:34.280892 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bfvkv" event={"ID":"ad20eb21-3645-4e36-8230-d2104175658a","Type":"ContainerStarted","Data":"5f126cb565d1c5c5635db7168788db8ae10ac67725553237c188b3e844a3d2ad"} Jan 21 13:24:36 crc kubenswrapper[4959]: I0121 13:24:36.778805 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wn2jb" Jan 21 13:24:36 crc kubenswrapper[4959]: I0121 13:24:36.834372 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bfvkv"] Jan 21 13:24:37 crc kubenswrapper[4959]: I0121 13:24:37.640838 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v5rrt"] Jan 21 13:24:37 crc kubenswrapper[4959]: I0121 13:24:37.641542 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:37 crc kubenswrapper[4959]: I0121 13:24:37.649724 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v5rrt"] Jan 21 13:24:37 crc kubenswrapper[4959]: I0121 13:24:37.759556 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjc4\" (UniqueName: \"kubernetes.io/projected/a402b706-070e-44a8-b298-231e0e20af75-kube-api-access-hxjc4\") pod \"openstack-operator-index-v5rrt\" (UID: \"a402b706-070e-44a8-b298-231e0e20af75\") " pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:37 crc kubenswrapper[4959]: I0121 13:24:37.861153 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxjc4\" (UniqueName: \"kubernetes.io/projected/a402b706-070e-44a8-b298-231e0e20af75-kube-api-access-hxjc4\") pod \"openstack-operator-index-v5rrt\" (UID: \"a402b706-070e-44a8-b298-231e0e20af75\") " pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:37 crc kubenswrapper[4959]: I0121 13:24:37.881168 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxjc4\" (UniqueName: \"kubernetes.io/projected/a402b706-070e-44a8-b298-231e0e20af75-kube-api-access-hxjc4\") pod \"openstack-operator-index-v5rrt\" (UID: \"a402b706-070e-44a8-b298-231e0e20af75\") " pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:37 crc kubenswrapper[4959]: I0121 13:24:37.983872 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:38 crc kubenswrapper[4959]: I0121 13:24:38.342757 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-sfwv8" Jan 21 13:24:38 crc kubenswrapper[4959]: I0121 13:24:38.470103 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v5rrt"] Jan 21 13:24:38 crc kubenswrapper[4959]: W0121 13:24:38.477470 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda402b706_070e_44a8_b298_231e0e20af75.slice/crio-75536e532d67cc62ae362842ab2d3485d70db6109c605140e3d798ede9a39cee WatchSource:0}: Error finding container 75536e532d67cc62ae362842ab2d3485d70db6109c605140e3d798ede9a39cee: Status 404 returned error can't find the container with id 75536e532d67cc62ae362842ab2d3485d70db6109c605140e3d798ede9a39cee Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.322553 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v5rrt" event={"ID":"a402b706-070e-44a8-b298-231e0e20af75","Type":"ContainerStarted","Data":"75536e532d67cc62ae362842ab2d3485d70db6109c605140e3d798ede9a39cee"} Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.649310 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6xklv"] Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.651872 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.661517 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xklv"] Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.786623 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2chsr\" (UniqueName: \"kubernetes.io/projected/60528a9f-0167-4eb7-8589-09a955093c80-kube-api-access-2chsr\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.786704 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-catalog-content\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.786772 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-utilities\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.888426 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-utilities\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.888523 4959 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2chsr\" (UniqueName: \"kubernetes.io/projected/60528a9f-0167-4eb7-8589-09a955093c80-kube-api-access-2chsr\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.888548 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-catalog-content\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.889101 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-utilities\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.889139 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-catalog-content\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.908103 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2chsr\" (UniqueName: \"kubernetes.io/projected/60528a9f-0167-4eb7-8589-09a955093c80-kube-api-access-2chsr\") pod \"community-operators-6xklv\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:39 crc kubenswrapper[4959]: I0121 13:24:39.975370 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:40 crc kubenswrapper[4959]: I0121 13:24:40.449251 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xklv"] Jan 21 13:24:40 crc kubenswrapper[4959]: W0121 13:24:40.460258 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60528a9f_0167_4eb7_8589_09a955093c80.slice/crio-dca4b6a832ad847c3d789a1b1e62fc516b4a7330f14fd45b2b3bd5664a692d06 WatchSource:0}: Error finding container dca4b6a832ad847c3d789a1b1e62fc516b4a7330f14fd45b2b3bd5664a692d06: Status 404 returned error can't find the container with id dca4b6a832ad847c3d789a1b1e62fc516b4a7330f14fd45b2b3bd5664a692d06 Jan 21 13:24:41 crc kubenswrapper[4959]: I0121 13:24:41.335610 4959 generic.go:334] "Generic (PLEG): container finished" podID="60528a9f-0167-4eb7-8589-09a955093c80" containerID="1b0a84b3a9ecee9bd250a78151a604d6bb079413ea2b7be9b01115fa0190165f" exitCode=0 Jan 21 13:24:41 crc kubenswrapper[4959]: I0121 13:24:41.335721 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xklv" event={"ID":"60528a9f-0167-4eb7-8589-09a955093c80","Type":"ContainerDied","Data":"1b0a84b3a9ecee9bd250a78151a604d6bb079413ea2b7be9b01115fa0190165f"} Jan 21 13:24:41 crc kubenswrapper[4959]: I0121 13:24:41.335952 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xklv" event={"ID":"60528a9f-0167-4eb7-8589-09a955093c80","Type":"ContainerStarted","Data":"dca4b6a832ad847c3d789a1b1e62fc516b4a7330f14fd45b2b3bd5664a692d06"} Jan 21 13:24:46 crc kubenswrapper[4959]: I0121 13:24:46.375279 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bfvkv" event={"ID":"ad20eb21-3645-4e36-8230-d2104175658a","Type":"ContainerStarted","Data":"49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c"} Jan 21 13:24:46 crc kubenswrapper[4959]: I0121 13:24:46.375582 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bfvkv" podUID="ad20eb21-3645-4e36-8230-d2104175658a" containerName="registry-server" containerID="cri-o://49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c" gracePeriod=2 Jan 21 13:24:46 crc kubenswrapper[4959]: I0121 13:24:46.397005 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bfvkv" podStartSLOduration=1.11358908 podStartE2EDuration="13.396987024s" podCreationTimestamp="2026-01-21 13:24:33 +0000 UTC" firstStartedPulling="2026-01-21 13:24:33.859437665 +0000 UTC m=+934.822468208" lastFinishedPulling="2026-01-21 13:24:46.142835609 +0000 UTC m=+947.105866152" observedRunningTime="2026-01-21 13:24:46.39240468 +0000 UTC m=+947.355435223" watchObservedRunningTime="2026-01-21 13:24:46.396987024 +0000 UTC m=+947.360017567" Jan 21 13:24:46 crc kubenswrapper[4959]: I0121 13:24:46.746823 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rwtbb" Jan 21 13:24:46 crc kubenswrapper[4959]: I0121 13:24:46.970023 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bfvkv" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.100431 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qmln\" (UniqueName: \"kubernetes.io/projected/ad20eb21-3645-4e36-8230-d2104175658a-kube-api-access-9qmln\") pod \"ad20eb21-3645-4e36-8230-d2104175658a\" (UID: \"ad20eb21-3645-4e36-8230-d2104175658a\") " Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.106569 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad20eb21-3645-4e36-8230-d2104175658a-kube-api-access-9qmln" (OuterVolumeSpecName: "kube-api-access-9qmln") pod "ad20eb21-3645-4e36-8230-d2104175658a" (UID: "ad20eb21-3645-4e36-8230-d2104175658a"). InnerVolumeSpecName "kube-api-access-9qmln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.202344 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qmln\" (UniqueName: \"kubernetes.io/projected/ad20eb21-3645-4e36-8230-d2104175658a-kube-api-access-9qmln\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.382617 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v5rrt" event={"ID":"a402b706-070e-44a8-b298-231e0e20af75","Type":"ContainerStarted","Data":"f68c228bbb86d98a61ebd875c3bc64238f85ab97593f5552601258552db34362"} Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.384515 4959 generic.go:334] "Generic (PLEG): container finished" podID="ad20eb21-3645-4e36-8230-d2104175658a" containerID="49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c" exitCode=0 Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.384575 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bfvkv" event={"ID":"ad20eb21-3645-4e36-8230-d2104175658a","Type":"ContainerDied","Data":"49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c"} Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.384598 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bfvkv" event={"ID":"ad20eb21-3645-4e36-8230-d2104175658a","Type":"ContainerDied","Data":"5f126cb565d1c5c5635db7168788db8ae10ac67725553237c188b3e844a3d2ad"} Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.384616 4959 scope.go:117] "RemoveContainer" containerID="49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.384715 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bfvkv" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.391270 4959 generic.go:334] "Generic (PLEG): container finished" podID="60528a9f-0167-4eb7-8589-09a955093c80" containerID="f5cbf9e3703268b4a48084011bdc1b7e377a4de0adb8d045f861f4a5ab2f575e" exitCode=0 Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.391328 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xklv" event={"ID":"60528a9f-0167-4eb7-8589-09a955093c80","Type":"ContainerDied","Data":"f5cbf9e3703268b4a48084011bdc1b7e377a4de0adb8d045f861f4a5ab2f575e"} Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.409684 4959 scope.go:117] "RemoveContainer" containerID="49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c" Jan 21 13:24:47 crc kubenswrapper[4959]: E0121 13:24:47.410172 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c\": container with ID starting with 49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c not found: ID does not exist" containerID="49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.410274 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c"} err="failed to get container status \"49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c\": rpc error: code = NotFound desc = could not find container \"49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c\": container with ID starting with 49175ec8b19c797a100d9bb440e27841b52965c3189cf09d0168d68098314a0c not found: ID does not exist" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.414397 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v5rrt" podStartSLOduration=2.658255043 podStartE2EDuration="10.414376224s" podCreationTimestamp="2026-01-21 13:24:37 +0000 UTC" firstStartedPulling="2026-01-21 13:24:38.47981799 +0000 UTC m=+939.442848533" lastFinishedPulling="2026-01-21 13:24:46.235939171 +0000 UTC m=+947.198969714" observedRunningTime="2026-01-21 13:24:47.40281349 +0000 UTC m=+948.365844053" watchObservedRunningTime="2026-01-21 13:24:47.414376224 +0000 UTC m=+948.377406767" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.423543 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bfvkv"] Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.429904 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bfvkv"] Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.984422 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:47 crc kubenswrapper[4959]: I0121 13:24:47.984781 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:48 crc kubenswrapper[4959]: I0121 13:24:48.017594 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:48 crc kubenswrapper[4959]: I0121 13:24:48.400711 4959 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-6xklv" event={"ID":"60528a9f-0167-4eb7-8589-09a955093c80","Type":"ContainerStarted","Data":"d6a9a7ab95773fe037c3f84463490adb3aafd693ac0c59e95d9524f28031d1ef"} Jan 21 13:24:48 crc kubenswrapper[4959]: I0121 13:24:48.417332 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6xklv" podStartSLOduration=2.827875587 podStartE2EDuration="9.417314693s" podCreationTimestamp="2026-01-21 13:24:39 +0000 UTC" firstStartedPulling="2026-01-21 13:24:41.337871473 +0000 UTC m=+942.300902016" lastFinishedPulling="2026-01-21 13:24:47.927310579 +0000 UTC m=+948.890341122" observedRunningTime="2026-01-21 13:24:48.415937676 +0000 UTC m=+949.378968239" watchObservedRunningTime="2026-01-21 13:24:48.417314693 +0000 UTC m=+949.380345236" Jan 21 13:24:49 crc kubenswrapper[4959]: I0121 13:24:49.321564 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad20eb21-3645-4e36-8230-d2104175658a" path="/var/lib/kubelet/pods/ad20eb21-3645-4e36-8230-d2104175658a/volumes" Jan 21 13:24:49 crc kubenswrapper[4959]: I0121 13:24:49.975843 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:49 crc kubenswrapper[4959]: I0121 13:24:49.975901 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:50 crc kubenswrapper[4959]: I0121 13:24:50.022171 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:24:51 crc kubenswrapper[4959]: I0121 13:24:51.379781 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:24:51 crc kubenswrapper[4959]: I0121 13:24:51.380076 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:24:51 crc kubenswrapper[4959]: I0121 13:24:51.380181 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:24:51 crc kubenswrapper[4959]: I0121 13:24:51.380763 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"843c9a535cea21503639885bda8c5e42d1482db615844b1ac00c900cdaba0bca"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:24:51 crc kubenswrapper[4959]: I0121 13:24:51.380819 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://843c9a535cea21503639885bda8c5e42d1482db615844b1ac00c900cdaba0bca" gracePeriod=600 Jan 21 13:24:52 crc kubenswrapper[4959]: I0121 13:24:52.427812 4959 generic.go:334] "Generic 
(PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="843c9a535cea21503639885bda8c5e42d1482db615844b1ac00c900cdaba0bca" exitCode=0 Jan 21 13:24:52 crc kubenswrapper[4959]: I0121 13:24:52.427955 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"843c9a535cea21503639885bda8c5e42d1482db615844b1ac00c900cdaba0bca"} Jan 21 13:24:52 crc kubenswrapper[4959]: I0121 13:24:52.428407 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"d241f95bdad8e099eb04c705c02b5632d266875f065692682f2eadc1b6776be6"} Jan 21 13:24:52 crc kubenswrapper[4959]: I0121 13:24:52.428430 4959 scope.go:117] "RemoveContainer" containerID="b802840efc0a2a43f88d6b69a868dc35f4fb5bac7bce20e288e99506f21a88de" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.081493 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-v5rrt" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.439773 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5md62"] Jan 21 13:24:58 crc kubenswrapper[4959]: E0121 13:24:58.440044 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad20eb21-3645-4e36-8230-d2104175658a" containerName="registry-server" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.440066 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad20eb21-3645-4e36-8230-d2104175658a" containerName="registry-server" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.440234 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad20eb21-3645-4e36-8230-d2104175658a" containerName="registry-server" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.441054 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.455350 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5md62"] Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.535054 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-catalog-content\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.535604 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-utilities\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.535745 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldrck\" (UniqueName: \"kubernetes.io/projected/fabbd2b7-32be-4881-ad20-1983b48afe8e-kube-api-access-ldrck\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.636357 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-utilities\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.636947 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldrck\" (UniqueName: \"kubernetes.io/projected/fabbd2b7-32be-4881-ad20-1983b48afe8e-kube-api-access-ldrck\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.637323 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-catalog-content\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.636886 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-utilities\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.637580 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-catalog-content\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.670372 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ldrck\" (UniqueName: \"kubernetes.io/projected/fabbd2b7-32be-4881-ad20-1983b48afe8e-kube-api-access-ldrck\") pod \"certified-operators-5md62\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:58 crc kubenswrapper[4959]: I0121 13:24:58.766467 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:24:59 crc kubenswrapper[4959]: I0121 13:24:59.275193 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5md62"] Jan 21 13:24:59 crc kubenswrapper[4959]: I0121 13:24:59.471256 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5md62" event={"ID":"fabbd2b7-32be-4881-ad20-1983b48afe8e","Type":"ContainerStarted","Data":"06ac389719f7d14804f71cae6f94f969040d34ca387daac5f34fae407e65e404"} Jan 21 13:25:00 crc kubenswrapper[4959]: I0121 13:25:00.029675 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:25:00 crc kubenswrapper[4959]: I0121 13:25:00.478143 4959 generic.go:334] "Generic (PLEG): container finished" podID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerID="4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed" exitCode=0 Jan 21 13:25:00 crc kubenswrapper[4959]: I0121 13:25:00.478200 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5md62" event={"ID":"fabbd2b7-32be-4881-ad20-1983b48afe8e","Type":"ContainerDied","Data":"4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed"} Jan 21 13:25:02 crc kubenswrapper[4959]: I0121 13:25:02.499760 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5md62" event={"ID":"fabbd2b7-32be-4881-ad20-1983b48afe8e","Type":"ContainerStarted","Data":"a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0"} Jan 21 13:25:03 crc kubenswrapper[4959]: I0121 13:25:03.233726 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xklv"] Jan 21 13:25:03 crc kubenswrapper[4959]: I0121 13:25:03.233980 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6xklv" podUID="60528a9f-0167-4eb7-8589-09a955093c80" containerName="registry-server" containerID="cri-o://d6a9a7ab95773fe037c3f84463490adb3aafd693ac0c59e95d9524f28031d1ef" gracePeriod=2 Jan 21 13:25:03 crc kubenswrapper[4959]: I0121 13:25:03.509241 4959 generic.go:334] "Generic (PLEG): container finished" podID="60528a9f-0167-4eb7-8589-09a955093c80" containerID="d6a9a7ab95773fe037c3f84463490adb3aafd693ac0c59e95d9524f28031d1ef" exitCode=0 Jan 21 13:25:03 crc kubenswrapper[4959]: I0121 13:25:03.509342 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xklv" event={"ID":"60528a9f-0167-4eb7-8589-09a955093c80","Type":"ContainerDied","Data":"d6a9a7ab95773fe037c3f84463490adb3aafd693ac0c59e95d9524f28031d1ef"} Jan 21 13:25:03 crc kubenswrapper[4959]: I0121 13:25:03.511688 4959 generic.go:334] "Generic (PLEG): container finished" podID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerID="a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0" exitCode=0 Jan 21 13:25:03 crc kubenswrapper[4959]: I0121 13:25:03.511782 4959 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5md62" event={"ID":"fabbd2b7-32be-4881-ad20-1983b48afe8e","Type":"ContainerDied","Data":"a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0"} Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.140756 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.317833 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2chsr\" (UniqueName: \"kubernetes.io/projected/60528a9f-0167-4eb7-8589-09a955093c80-kube-api-access-2chsr\") pod \"60528a9f-0167-4eb7-8589-09a955093c80\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.318267 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-catalog-content\") pod \"60528a9f-0167-4eb7-8589-09a955093c80\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.318304 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-utilities\") pod \"60528a9f-0167-4eb7-8589-09a955093c80\" (UID: \"60528a9f-0167-4eb7-8589-09a955093c80\") " Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.320157 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-utilities" (OuterVolumeSpecName: "utilities") pod "60528a9f-0167-4eb7-8589-09a955093c80" (UID: "60528a9f-0167-4eb7-8589-09a955093c80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.323371 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60528a9f-0167-4eb7-8589-09a955093c80-kube-api-access-2chsr" (OuterVolumeSpecName: "kube-api-access-2chsr") pod "60528a9f-0167-4eb7-8589-09a955093c80" (UID: "60528a9f-0167-4eb7-8589-09a955093c80"). InnerVolumeSpecName "kube-api-access-2chsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.371021 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60528a9f-0167-4eb7-8589-09a955093c80" (UID: "60528a9f-0167-4eb7-8589-09a955093c80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.419568 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2chsr\" (UniqueName: \"kubernetes.io/projected/60528a9f-0167-4eb7-8589-09a955093c80-kube-api-access-2chsr\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.419616 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.419632 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60528a9f-0167-4eb7-8589-09a955093c80-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.520156 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xklv" event={"ID":"60528a9f-0167-4eb7-8589-09a955093c80","Type":"ContainerDied","Data":"dca4b6a832ad847c3d789a1b1e62fc516b4a7330f14fd45b2b3bd5664a692d06"} Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.520207 4959 scope.go:117] "RemoveContainer" containerID="d6a9a7ab95773fe037c3f84463490adb3aafd693ac0c59e95d9524f28031d1ef" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.520313 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xklv" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.534303 4959 scope.go:117] "RemoveContainer" containerID="f5cbf9e3703268b4a48084011bdc1b7e377a4de0adb8d045f861f4a5ab2f575e" Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.550161 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xklv"] Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.553482 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6xklv"] Jan 21 13:25:04 crc kubenswrapper[4959]: I0121 13:25:04.555187 4959 scope.go:117] "RemoveContainer" containerID="1b0a84b3a9ecee9bd250a78151a604d6bb079413ea2b7be9b01115fa0190165f" Jan 21 13:25:05 crc kubenswrapper[4959]: I0121 13:25:05.292611 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60528a9f-0167-4eb7-8589-09a955093c80" path="/var/lib/kubelet/pods/60528a9f-0167-4eb7-8589-09a955093c80/volumes" Jan 21 13:25:05 crc kubenswrapper[4959]: I0121 13:25:05.530022 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5md62" event={"ID":"fabbd2b7-32be-4881-ad20-1983b48afe8e","Type":"ContainerStarted","Data":"a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404"} Jan 21 13:25:05 crc kubenswrapper[4959]: I0121 13:25:05.552663 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5md62" podStartSLOduration=3.836544438 podStartE2EDuration="7.552642197s" podCreationTimestamp="2026-01-21 13:24:58 +0000 UTC" firstStartedPulling="2026-01-21 13:25:00.479704263 +0000 UTC m=+961.442734806" lastFinishedPulling="2026-01-21 13:25:04.195802022 +0000 UTC m=+965.158832565" observedRunningTime="2026-01-21 13:25:05.54683754 +0000 UTC m=+966.509868103" watchObservedRunningTime="2026-01-21 13:25:05.552642197 +0000 UTC m=+966.515672740" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.276355 4959 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4"] Jan 21 13:25:08 crc kubenswrapper[4959]: E0121 13:25:08.276843 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60528a9f-0167-4eb7-8589-09a955093c80" containerName="extract-content" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.276856 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="60528a9f-0167-4eb7-8589-09a955093c80" containerName="extract-content" Jan 21 13:25:08 crc kubenswrapper[4959]: E0121 13:25:08.276870 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60528a9f-0167-4eb7-8589-09a955093c80" containerName="extract-utilities" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.276876 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="60528a9f-0167-4eb7-8589-09a955093c80" containerName="extract-utilities" Jan 21 13:25:08 crc kubenswrapper[4959]: E0121 13:25:08.276889 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60528a9f-0167-4eb7-8589-09a955093c80" containerName="registry-server" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.276895 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="60528a9f-0167-4eb7-8589-09a955093c80" containerName="registry-server" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.276990 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="60528a9f-0167-4eb7-8589-09a955093c80" containerName="registry-server" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.277868 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.281163 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pk9pv" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.284564 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4"] Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.369652 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7ksg\" (UniqueName: \"kubernetes.io/projected/378bcded-d2db-4b72-bcdf-170b163dcdc4-kube-api-access-w7ksg\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.369723 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-util\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.369801 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-bundle\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " 
pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.470646 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-bundle\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.470743 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7ksg\" (UniqueName: \"kubernetes.io/projected/378bcded-d2db-4b72-bcdf-170b163dcdc4-kube-api-access-w7ksg\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.470782 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-util\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.471329 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-bundle\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.471550 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-util\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.490674 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7ksg\" (UniqueName: \"kubernetes.io/projected/378bcded-d2db-4b72-bcdf-170b163dcdc4-kube-api-access-w7ksg\") pod \"0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.598614 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.766772 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.767115 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.815672 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4"] Jan 21 13:25:08 crc kubenswrapper[4959]: I0121 13:25:08.819445 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:25:09 crc kubenswrapper[4959]: I0121 13:25:09.558190 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" event={"ID":"378bcded-d2db-4b72-bcdf-170b163dcdc4","Type":"ContainerStarted","Data":"38140305272a4e61247d468c0ef52cd5233fc04c917ccc22f7aded2575f52435"} Jan 21 13:25:09 crc kubenswrapper[4959]: I0121 13:25:09.596249 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:25:10 crc kubenswrapper[4959]: I0121 13:25:10.235042 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5md62"] Jan 21 13:25:10 crc kubenswrapper[4959]: I0121 13:25:10.565007 4959 generic.go:334] "Generic (PLEG): container finished" podID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerID="04cb82d88dff3b93e54222bbdef7c6d47edfdab223fa34f95f2c9cebd8a86ec0" exitCode=0 Jan 21 13:25:10 crc kubenswrapper[4959]: I0121 13:25:10.565129 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" event={"ID":"378bcded-d2db-4b72-bcdf-170b163dcdc4","Type":"ContainerDied","Data":"04cb82d88dff3b93e54222bbdef7c6d47edfdab223fa34f95f2c9cebd8a86ec0"} Jan 21 13:25:11 crc kubenswrapper[4959]: I0121 13:25:11.572184 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5md62" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerName="registry-server" containerID="cri-o://a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404" gracePeriod=2 Jan 21 13:25:11 crc kubenswrapper[4959]: I0121 13:25:11.986497 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.025642 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-utilities\") pod \"fabbd2b7-32be-4881-ad20-1983b48afe8e\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.025916 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-catalog-content\") pod \"fabbd2b7-32be-4881-ad20-1983b48afe8e\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.026168 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldrck\" (UniqueName: \"kubernetes.io/projected/fabbd2b7-32be-4881-ad20-1983b48afe8e-kube-api-access-ldrck\") pod \"fabbd2b7-32be-4881-ad20-1983b48afe8e\" (UID: \"fabbd2b7-32be-4881-ad20-1983b48afe8e\") " Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.030251 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-utilities" (OuterVolumeSpecName: "utilities") pod "fabbd2b7-32be-4881-ad20-1983b48afe8e" (UID: "fabbd2b7-32be-4881-ad20-1983b48afe8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.032325 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabbd2b7-32be-4881-ad20-1983b48afe8e-kube-api-access-ldrck" (OuterVolumeSpecName: "kube-api-access-ldrck") pod "fabbd2b7-32be-4881-ad20-1983b48afe8e" (UID: "fabbd2b7-32be-4881-ad20-1983b48afe8e"). InnerVolumeSpecName "kube-api-access-ldrck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.096326 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fabbd2b7-32be-4881-ad20-1983b48afe8e" (UID: "fabbd2b7-32be-4881-ad20-1983b48afe8e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.129896 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.129963 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldrck\" (UniqueName: \"kubernetes.io/projected/fabbd2b7-32be-4881-ad20-1983b48afe8e-kube-api-access-ldrck\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.129988 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabbd2b7-32be-4881-ad20-1983b48afe8e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.582910 4959 generic.go:334] "Generic (PLEG): container finished" podID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerID="4ce0030f059a12d7f0727c9017354a33c3e03848c8575f59b82796f849a37564" exitCode=0 Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.583200 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" event={"ID":"378bcded-d2db-4b72-bcdf-170b163dcdc4","Type":"ContainerDied","Data":"4ce0030f059a12d7f0727c9017354a33c3e03848c8575f59b82796f849a37564"} Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.587201 4959 generic.go:334] "Generic (PLEG): container finished" podID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerID="a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404" exitCode=0 Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.587243 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5md62" event={"ID":"fabbd2b7-32be-4881-ad20-1983b48afe8e","Type":"ContainerDied","Data":"a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404"} Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.587266 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5md62" event={"ID":"fabbd2b7-32be-4881-ad20-1983b48afe8e","Type":"ContainerDied","Data":"06ac389719f7d14804f71cae6f94f969040d34ca387daac5f34fae407e65e404"} Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.587245 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5md62" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.587288 4959 scope.go:117] "RemoveContainer" containerID="a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.611653 4959 scope.go:117] "RemoveContainer" containerID="a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.632593 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5md62"] Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.640152 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5md62"] Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.649705 4959 scope.go:117] "RemoveContainer" containerID="4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.665778 4959 scope.go:117] "RemoveContainer" containerID="a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404" Jan 21 13:25:12 crc kubenswrapper[4959]: E0121 13:25:12.666282 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404\": container with ID starting with a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404 not found: ID does not exist" containerID="a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.666332 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404"} err="failed to get container status \"a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404\": rpc error: code = NotFound desc = could not find container \"a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404\": container with ID starting with a1959372817be22a4ff8086c31f3f77f71fae74f462206af7b136da06b0a6404 not found: ID does not exist" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.666358 4959 scope.go:117] "RemoveContainer" containerID="a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0" Jan 21 13:25:12 crc kubenswrapper[4959]: E0121 13:25:12.667141 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0\": container with ID starting with a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0 not found: ID does not exist" containerID="a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.667221 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0"} err="failed to get container status \"a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0\": rpc error: code = NotFound desc = could not find container \"a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0\": container with ID starting with a8a41838960200b350dda855a30ada9c695a0246a3b0782ba057c6d6187d22a0 not found: ID does not exist" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.667256 4959 scope.go:117] "RemoveContainer" 
containerID="4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed" Jan 21 13:25:12 crc kubenswrapper[4959]: E0121 13:25:12.667637 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed\": container with ID starting with 4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed not found: ID does not exist" containerID="4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed" Jan 21 13:25:12 crc kubenswrapper[4959]: I0121 13:25:12.667670 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed"} err="failed to get container status \"4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed\": rpc error: code = NotFound desc = could not find container \"4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed\": container with ID starting with 4beda00f9d3a4cdca5488d4838b799dfbb9db2a6b257971dde894611ce0722ed not found: ID does not exist" Jan 21 13:25:13 crc kubenswrapper[4959]: I0121 13:25:13.294159 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" path="/var/lib/kubelet/pods/fabbd2b7-32be-4881-ad20-1983b48afe8e/volumes" Jan 21 13:25:13 crc kubenswrapper[4959]: I0121 13:25:13.594605 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" event={"ID":"378bcded-d2db-4b72-bcdf-170b163dcdc4","Type":"ContainerStarted","Data":"57b0751c33401c3cfc1779aabe050f8a26f985e432e93044041bd6745c864867"} Jan 21 13:25:14 crc kubenswrapper[4959]: I0121 13:25:14.608955 4959 generic.go:334] "Generic (PLEG): container finished" podID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerID="57b0751c33401c3cfc1779aabe050f8a26f985e432e93044041bd6745c864867" exitCode=0 Jan 21 13:25:14 crc kubenswrapper[4959]: I0121 13:25:14.609074 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" event={"ID":"378bcded-d2db-4b72-bcdf-170b163dcdc4","Type":"ContainerDied","Data":"57b0751c33401c3cfc1779aabe050f8a26f985e432e93044041bd6745c864867"} Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.857744 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.887560 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7ksg\" (UniqueName: \"kubernetes.io/projected/378bcded-d2db-4b72-bcdf-170b163dcdc4-kube-api-access-w7ksg\") pod \"378bcded-d2db-4b72-bcdf-170b163dcdc4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.887761 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-bundle\") pod \"378bcded-d2db-4b72-bcdf-170b163dcdc4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.887804 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-util\") pod \"378bcded-d2db-4b72-bcdf-170b163dcdc4\" (UID: \"378bcded-d2db-4b72-bcdf-170b163dcdc4\") " Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.888971 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-bundle" (OuterVolumeSpecName: "bundle") pod "378bcded-d2db-4b72-bcdf-170b163dcdc4" (UID: "378bcded-d2db-4b72-bcdf-170b163dcdc4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.894324 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378bcded-d2db-4b72-bcdf-170b163dcdc4-kube-api-access-w7ksg" (OuterVolumeSpecName: "kube-api-access-w7ksg") pod "378bcded-d2db-4b72-bcdf-170b163dcdc4" (UID: "378bcded-d2db-4b72-bcdf-170b163dcdc4"). InnerVolumeSpecName "kube-api-access-w7ksg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.898173 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-util" (OuterVolumeSpecName: "util") pod "378bcded-d2db-4b72-bcdf-170b163dcdc4" (UID: "378bcded-d2db-4b72-bcdf-170b163dcdc4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.988821 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7ksg\" (UniqueName: \"kubernetes.io/projected/378bcded-d2db-4b72-bcdf-170b163dcdc4-kube-api-access-w7ksg\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.988870 4959 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:15 crc kubenswrapper[4959]: I0121 13:25:15.988884 4959 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/378bcded-d2db-4b72-bcdf-170b163dcdc4-util\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:16 crc kubenswrapper[4959]: I0121 13:25:16.622120 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" event={"ID":"378bcded-d2db-4b72-bcdf-170b163dcdc4","Type":"ContainerDied","Data":"38140305272a4e61247d468c0ef52cd5233fc04c917ccc22f7aded2575f52435"} Jan 21 13:25:16 crc kubenswrapper[4959]: I0121 13:25:16.622443 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38140305272a4e61247d468c0ef52cd5233fc04c917ccc22f7aded2575f52435" Jan 21 13:25:16 crc kubenswrapper[4959]: I0121 13:25:16.622160 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.824456 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc"] Jan 21 13:25:19 crc kubenswrapper[4959]: E0121 13:25:19.825132 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerName="pull" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825149 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerName="pull" Jan 21 13:25:19 crc kubenswrapper[4959]: E0121 13:25:19.825168 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerName="extract" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825175 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerName="extract" Jan 21 13:25:19 crc kubenswrapper[4959]: E0121 13:25:19.825187 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerName="registry-server" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825195 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerName="registry-server" Jan 21 13:25:19 crc kubenswrapper[4959]: E0121 13:25:19.825205 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerName="util" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825212 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerName="util" Jan 21 13:25:19 crc kubenswrapper[4959]: E0121 13:25:19.825224 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" 
containerName="extract-content" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825231 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerName="extract-content" Jan 21 13:25:19 crc kubenswrapper[4959]: E0121 13:25:19.825251 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerName="extract-utilities" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825259 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerName="extract-utilities" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825390 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="fabbd2b7-32be-4881-ad20-1983b48afe8e" containerName="registry-server" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825419 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="378bcded-d2db-4b72-bcdf-170b163dcdc4" containerName="extract" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.825921 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.828481 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-89k6r" Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.849332 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc"] Jan 21 13:25:19 crc kubenswrapper[4959]: I0121 13:25:19.939914 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whj84\" (UniqueName: \"kubernetes.io/projected/106d5d1f-03fd-4706-96e9-f56588efc2ef-kube-api-access-whj84\") pod \"openstack-operator-controller-init-6c8559dcdb-l5dgc\" (UID: \"106d5d1f-03fd-4706-96e9-f56588efc2ef\") " pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" Jan 21 13:25:20 crc kubenswrapper[4959]: I0121 13:25:20.041193 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whj84\" (UniqueName: \"kubernetes.io/projected/106d5d1f-03fd-4706-96e9-f56588efc2ef-kube-api-access-whj84\") pod \"openstack-operator-controller-init-6c8559dcdb-l5dgc\" (UID: \"106d5d1f-03fd-4706-96e9-f56588efc2ef\") " pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" Jan 21 13:25:20 crc kubenswrapper[4959]: I0121 13:25:20.062442 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whj84\" (UniqueName: \"kubernetes.io/projected/106d5d1f-03fd-4706-96e9-f56588efc2ef-kube-api-access-whj84\") pod \"openstack-operator-controller-init-6c8559dcdb-l5dgc\" (UID: \"106d5d1f-03fd-4706-96e9-f56588efc2ef\") " pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" Jan 21 13:25:20 crc kubenswrapper[4959]: I0121 13:25:20.143740 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" Jan 21 13:25:20 crc kubenswrapper[4959]: I0121 13:25:20.566949 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc"] Jan 21 13:25:20 crc kubenswrapper[4959]: I0121 13:25:20.645476 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" event={"ID":"106d5d1f-03fd-4706-96e9-f56588efc2ef","Type":"ContainerStarted","Data":"96d5ec40f7fd7c66bc4c9d76a87d113fd1232dde10ab1e0879237c22ff7897b1"} Jan 21 13:25:28 crc kubenswrapper[4959]: I0121 13:25:28.818052 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" event={"ID":"106d5d1f-03fd-4706-96e9-f56588efc2ef","Type":"ContainerStarted","Data":"0b5c0653911dec6d88abd2e9238569983bbb610bf4aed7c152e9736ffdb3dcd7"} Jan 21 13:25:28 crc kubenswrapper[4959]: I0121 13:25:28.819483 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" Jan 21 13:25:28 crc kubenswrapper[4959]: I0121 13:25:28.847077 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" podStartSLOduration=1.910283391 podStartE2EDuration="9.847059168s" podCreationTimestamp="2026-01-21 13:25:19 +0000 UTC" firstStartedPulling="2026-01-21 13:25:20.578548104 +0000 UTC m=+981.541578647" lastFinishedPulling="2026-01-21 13:25:28.515323881 +0000 UTC m=+989.478354424" observedRunningTime="2026-01-21 13:25:28.84529109 +0000 UTC m=+989.808321633" watchObservedRunningTime="2026-01-21 13:25:28.847059168 +0000 UTC m=+989.810089711" Jan 21 13:25:40 crc kubenswrapper[4959]: I0121 13:25:40.147730 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6c8559dcdb-l5dgc" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.137632 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.138997 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.145147 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k2p4v" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.150205 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.172917 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-d69ql"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.173705 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.175747 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r6gjh" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.178568 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.179318 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.181073 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8zlj5" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.198370 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhf7\" (UniqueName: \"kubernetes.io/projected/988f7f11-664f-4f70-9b38-2852dd3b17a0-kube-api-access-tzhf7\") pod \"designate-operator-controller-manager-9f958b845-d69ql\" (UID: \"988f7f11-664f-4f70-9b38-2852dd3b17a0\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.198452 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrszq\" (UniqueName: \"kubernetes.io/projected/a588ba98-33be-46aa-a582-4403d3a09a95-kube-api-access-wrszq\") pod \"barbican-operator-controller-manager-7ddb5c749-f49mc\" (UID: \"a588ba98-33be-46aa-a582-4403d3a09a95\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.198503 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpt5s\" (UniqueName: \"kubernetes.io/projected/cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7-kube-api-access-cpt5s\") pod \"cinder-operator-controller-manager-9b68f5989-64kwb\" (UID: \"cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.211668 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.237455 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-d69ql"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.251428 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.252500 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.266551 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zqlf4" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.268688 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.269898 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.276898 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.277924 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.278428 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-w9fmj" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.279674 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-99gv4" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.302134 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqfz\" (UniqueName: \"kubernetes.io/projected/8075108b-d9e1-40d4-9e2e-4faa59061778-kube-api-access-clqfz\") pod \"glance-operator-controller-manager-c6994669c-pp9dq\" (UID: \"8075108b-d9e1-40d4-9e2e-4faa59061778\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.302208 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhf7\" (UniqueName: \"kubernetes.io/projected/988f7f11-664f-4f70-9b38-2852dd3b17a0-kube-api-access-tzhf7\") pod \"designate-operator-controller-manager-9f958b845-d69ql\" (UID: \"988f7f11-664f-4f70-9b38-2852dd3b17a0\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.302242 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfn9\" (UniqueName: \"kubernetes.io/projected/a6ef5ba7-019c-416f-9003-54c5ce70f01a-kube-api-access-qhfn9\") pod \"horizon-operator-controller-manager-77d5c5b54f-g4bs8\" (UID: \"a6ef5ba7-019c-416f-9003-54c5ce70f01a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.302276 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrszq\" (UniqueName: \"kubernetes.io/projected/a588ba98-33be-46aa-a582-4403d3a09a95-kube-api-access-wrszq\") pod \"barbican-operator-controller-manager-7ddb5c749-f49mc\" (UID: \"a588ba98-33be-46aa-a582-4403d3a09a95\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.302306 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-cpt5s\" (UniqueName: \"kubernetes.io/projected/cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7-kube-api-access-cpt5s\") pod \"cinder-operator-controller-manager-9b68f5989-64kwb\" (UID: \"cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.302377 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvb9d\" (UniqueName: \"kubernetes.io/projected/da20d161-5e78-4c3d-a021-75244caefb16-kube-api-access-mvb9d\") pod \"heat-operator-controller-manager-594c8c9d5d-n54x8\" (UID: \"da20d161-5e78-4c3d-a021-75244caefb16\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.319068 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.319177 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.319194 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.319577 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.321434 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.329138 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.333229 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.334360 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-775dm" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.336899 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6t2kw" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.337063 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.338488 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhf7\" (UniqueName: \"kubernetes.io/projected/988f7f11-664f-4f70-9b38-2852dd3b17a0-kube-api-access-tzhf7\") pod \"designate-operator-controller-manager-9f958b845-d69ql\" (UID: \"988f7f11-664f-4f70-9b38-2852dd3b17a0\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.340154 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.344658 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrszq\" (UniqueName: \"kubernetes.io/projected/a588ba98-33be-46aa-a582-4403d3a09a95-kube-api-access-wrszq\") pod \"barbican-operator-controller-manager-7ddb5c749-f49mc\" (UID: \"a588ba98-33be-46aa-a582-4403d3a09a95\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.349270 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.350288 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.351073 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpt5s\" (UniqueName: \"kubernetes.io/projected/cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7-kube-api-access-cpt5s\") pod \"cinder-operator-controller-manager-9b68f5989-64kwb\" (UID: \"cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.356508 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4zs65" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.357173 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.377563 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.386444 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.387584 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.389759 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-plc5w" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.403912 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkq8m\" (UniqueName: \"kubernetes.io/projected/ed8f3e55-7ed1-4794-8171-461cf3ebc132-kube-api-access-wkq8m\") pod \"ironic-operator-controller-manager-78757b4889-fzq84\" (UID: \"ed8f3e55-7ed1-4794-8171-461cf3ebc132\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.403983 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpckr\" (UniqueName: \"kubernetes.io/projected/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-kube-api-access-qpckr\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.404021 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvb9d\" (UniqueName: \"kubernetes.io/projected/da20d161-5e78-4c3d-a021-75244caefb16-kube-api-access-mvb9d\") pod \"heat-operator-controller-manager-594c8c9d5d-n54x8\" (UID: \"da20d161-5e78-4c3d-a021-75244caefb16\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.404063 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqfz\" (UniqueName: \"kubernetes.io/projected/8075108b-d9e1-40d4-9e2e-4faa59061778-kube-api-access-clqfz\") pod \"glance-operator-controller-manager-c6994669c-pp9dq\" (UID: 
\"8075108b-d9e1-40d4-9e2e-4faa59061778\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.404087 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.404131 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhfn9\" (UniqueName: \"kubernetes.io/projected/a6ef5ba7-019c-416f-9003-54c5ce70f01a-kube-api-access-qhfn9\") pod \"horizon-operator-controller-manager-77d5c5b54f-g4bs8\" (UID: \"a6ef5ba7-019c-416f-9003-54c5ce70f01a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.404244 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqmhb\" (UniqueName: \"kubernetes.io/projected/49ec4962-8c60-4bd2-9ada-8f25cc21baa4-kube-api-access-mqmhb\") pod \"keystone-operator-controller-manager-767fdc4f47-dfzqp\" (UID: \"49ec4962-8c60-4bd2-9ada-8f25cc21baa4\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.408975 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.410186 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.418556 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-shgwj" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.418748 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.431608 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.437819 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-crgjd"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.438681 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.449806 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jjjx2" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.450013 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.450949 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.458620 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xstlj" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.464592 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.469202 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvb9d\" (UniqueName: \"kubernetes.io/projected/da20d161-5e78-4c3d-a021-75244caefb16-kube-api-access-mvb9d\") pod \"heat-operator-controller-manager-594c8c9d5d-n54x8\" (UID: \"da20d161-5e78-4c3d-a021-75244caefb16\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.487161 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.488341 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhfn9\" (UniqueName: \"kubernetes.io/projected/a6ef5ba7-019c-416f-9003-54c5ce70f01a-kube-api-access-qhfn9\") pod \"horizon-operator-controller-manager-77d5c5b54f-g4bs8\" (UID: \"a6ef5ba7-019c-416f-9003-54c5ce70f01a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.488428 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.494682 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.501256 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-crgjd"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512007 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqfz\" (UniqueName: \"kubernetes.io/projected/8075108b-d9e1-40d4-9e2e-4faa59061778-kube-api-access-clqfz\") pod \"glance-operator-controller-manager-c6994669c-pp9dq\" (UID: \"8075108b-d9e1-40d4-9e2e-4faa59061778\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512084 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512762 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7v2\" (UniqueName: \"kubernetes.io/projected/a24ac487-ea43-40fd-b6ea-cd7740cf80ce-kube-api-access-2t7v2\") pod \"neutron-operator-controller-manager-cb4666565-h4c6v\" (UID: \"a24ac487-ea43-40fd-b6ea-cd7740cf80ce\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512816 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbv2\" 
(UniqueName: \"kubernetes.io/projected/d3753491-e2ab-4cf4-b8be-7de464734343-kube-api-access-fhbv2\") pod \"mariadb-operator-controller-manager-c87fff755-5t88r\" (UID: \"d3753491-e2ab-4cf4-b8be-7de464734343\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512852 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqmhb\" (UniqueName: \"kubernetes.io/projected/49ec4962-8c60-4bd2-9ada-8f25cc21baa4-kube-api-access-mqmhb\") pod \"keystone-operator-controller-manager-767fdc4f47-dfzqp\" (UID: \"49ec4962-8c60-4bd2-9ada-8f25cc21baa4\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512879 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkq8m\" (UniqueName: \"kubernetes.io/projected/ed8f3e55-7ed1-4794-8171-461cf3ebc132-kube-api-access-wkq8m\") pod \"ironic-operator-controller-manager-78757b4889-fzq84\" (UID: \"ed8f3e55-7ed1-4794-8171-461cf3ebc132\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512911 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc8ql\" (UniqueName: \"kubernetes.io/projected/ae0b11f6-2763-4884-b37b-ec8dc6548a79-kube-api-access-bc8ql\") pod \"nova-operator-controller-manager-65849867d6-crgjd\" (UID: \"ae0b11f6-2763-4884-b37b-ec8dc6548a79\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512945 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpckr\" (UniqueName: \"kubernetes.io/projected/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-kube-api-access-qpckr\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.512978 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh64w\" (UniqueName: \"kubernetes.io/projected/3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d-kube-api-access-lh64w\") pod \"octavia-operator-controller-manager-7fc9b76cf6-kslqm\" (UID: \"3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.513027 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqjt\" (UniqueName: \"kubernetes.io/projected/fb0839da-0f44-43dd-a240-72c0f032f30a-kube-api-access-8qqjt\") pod \"manila-operator-controller-manager-c5fd576c9-gkv5c\" (UID: \"fb0839da-0f44-43dd-a240-72c0f032f30a\") " pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.513063 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:05 crc 
kubenswrapper[4959]: I0121 13:26:05.513215 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" Jan 21 13:26:05 crc kubenswrapper[4959]: E0121 13:26:05.514548 4959 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:05 crc kubenswrapper[4959]: E0121 13:26:05.514715 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert podName:dd86c02d-b4ab-42e5-9a16-a968c0aeba96 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:06.014623756 +0000 UTC m=+1026.977654299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert") pod "infra-operator-controller-manager-77c48c7859-96qjh" (UID: "dd86c02d-b4ab-42e5-9a16-a968c0aeba96") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.537388 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mbwd2" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.545307 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.546107 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.546729 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" Jan 21 13:26:05 crc kubenswrapper[4959]: W0121 13:26:05.551977 4959 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": failed to list *v1.Secret: secrets "openstack-baremetal-operator-webhook-server-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 21 13:26:05 crc kubenswrapper[4959]: E0121 13:26:05.552025 4959 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-baremetal-operator-webhook-server-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:26:05 crc kubenswrapper[4959]: W0121 13:26:05.561363 4959 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9xpgz": failed to list *v1.Secret: secrets "openstack-baremetal-operator-controller-manager-dockercfg-9xpgz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 21 13:26:05 crc kubenswrapper[4959]: E0121 13:26:05.561415 4959 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-controller-manager-dockercfg-9xpgz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-baremetal-operator-controller-manager-dockercfg-9xpgz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.561740 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.562572 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.571933 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qvl4h" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.572720 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqmhb\" (UniqueName: \"kubernetes.io/projected/49ec4962-8c60-4bd2-9ada-8f25cc21baa4-kube-api-access-mqmhb\") pod \"keystone-operator-controller-manager-767fdc4f47-dfzqp\" (UID: \"49ec4962-8c60-4bd2-9ada-8f25cc21baa4\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.586515 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.593954 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpckr\" (UniqueName: \"kubernetes.io/projected/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-kube-api-access-qpckr\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.596106 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.598319 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkq8m\" (UniqueName: \"kubernetes.io/projected/ed8f3e55-7ed1-4794-8171-461cf3ebc132-kube-api-access-wkq8m\") pod \"ironic-operator-controller-manager-78757b4889-fzq84\" (UID: \"ed8f3e55-7ed1-4794-8171-461cf3ebc132\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.613949 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc8ql\" (UniqueName: \"kubernetes.io/projected/ae0b11f6-2763-4884-b37b-ec8dc6548a79-kube-api-access-bc8ql\") pod \"nova-operator-controller-manager-65849867d6-crgjd\" (UID: \"ae0b11f6-2763-4884-b37b-ec8dc6548a79\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.614020 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh64w\" (UniqueName: \"kubernetes.io/projected/3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d-kube-api-access-lh64w\") pod \"octavia-operator-controller-manager-7fc9b76cf6-kslqm\" (UID: \"3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.614067 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.618858 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.664926 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlg6\" (UniqueName: \"kubernetes.io/projected/1c5d42e4-5a3b-4cea-b0a7-3f334d801f22-kube-api-access-2nlg6\") pod \"ovn-operator-controller-manager-55db956ddc-6lsrf\" (UID: \"1c5d42e4-5a3b-4cea-b0a7-3f334d801f22\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.665035 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqjt\" (UniqueName: \"kubernetes.io/projected/fb0839da-0f44-43dd-a240-72c0f032f30a-kube-api-access-8qqjt\") pod \"manila-operator-controller-manager-c5fd576c9-gkv5c\" (UID: \"fb0839da-0f44-43dd-a240-72c0f032f30a\") " pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.665160 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7v2\" (UniqueName: \"kubernetes.io/projected/a24ac487-ea43-40fd-b6ea-cd7740cf80ce-kube-api-access-2t7v2\") pod \"neutron-operator-controller-manager-cb4666565-h4c6v\" (UID: \"a24ac487-ea43-40fd-b6ea-cd7740cf80ce\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.665233 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwwx7\" (UniqueName: \"kubernetes.io/projected/db113188-8b44-43d6-8e79-8231fbfff914-kube-api-access-hwwx7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.665287 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhbv2\" (UniqueName: \"kubernetes.io/projected/d3753491-e2ab-4cf4-b8be-7de464734343-kube-api-access-fhbv2\") pod \"mariadb-operator-controller-manager-c87fff755-5t88r\" (UID: \"d3753491-e2ab-4cf4-b8be-7de464734343\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.672843 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.706725 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.737901 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.770746 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwwx7\" (UniqueName: \"kubernetes.io/projected/db113188-8b44-43d6-8e79-8231fbfff914-kube-api-access-hwwx7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.770880 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.770911 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nlg6\" (UniqueName: \"kubernetes.io/projected/1c5d42e4-5a3b-4cea-b0a7-3f334d801f22-kube-api-access-2nlg6\") pod \"ovn-operator-controller-manager-55db956ddc-6lsrf\" (UID: \"1c5d42e4-5a3b-4cea-b0a7-3f334d801f22\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.772652 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.776420 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.805007 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh64w\" (UniqueName: \"kubernetes.io/projected/3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d-kube-api-access-lh64w\") pod \"octavia-operator-controller-manager-7fc9b76cf6-kslqm\" (UID: \"3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.809776 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4nr5z" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.822922 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.830242 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhbv2\" (UniqueName: \"kubernetes.io/projected/d3753491-e2ab-4cf4-b8be-7de464734343-kube-api-access-fhbv2\") pod \"mariadb-operator-controller-manager-c87fff755-5t88r\" (UID: \"d3753491-e2ab-4cf4-b8be-7de464734343\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.830416 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.834275 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v"] Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.835143 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.843185 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc8ql\" (UniqueName: \"kubernetes.io/projected/ae0b11f6-2763-4884-b37b-ec8dc6548a79-kube-api-access-bc8ql\") pod \"nova-operator-controller-manager-65849867d6-crgjd\" (UID: \"ae0b11f6-2763-4884-b37b-ec8dc6548a79\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.848466 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zzzj7" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.859851 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.860768 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7v2\" (UniqueName: \"kubernetes.io/projected/a24ac487-ea43-40fd-b6ea-cd7740cf80ce-kube-api-access-2t7v2\") pod \"neutron-operator-controller-manager-cb4666565-h4c6v\" (UID: \"a24ac487-ea43-40fd-b6ea-cd7740cf80ce\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.880127 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ncf\" (UniqueName: \"kubernetes.io/projected/2776361f-f7a5-452f-b847-f1370993200b-kube-api-access-d5ncf\") pod \"placement-operator-controller-manager-686df47fcb-6wjxl\" (UID: \"2776361f-f7a5-452f-b847-f1370993200b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.973614 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.974730 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.990462 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ncf\" (UniqueName: \"kubernetes.io/projected/2776361f-f7a5-452f-b847-f1370993200b-kube-api-access-d5ncf\") pod \"placement-operator-controller-manager-686df47fcb-6wjxl\" (UID: \"2776361f-f7a5-452f-b847-f1370993200b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" Jan 21 13:26:05 crc kubenswrapper[4959]: I0121 13:26:05.990556 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchr9\" (UniqueName: \"kubernetes.io/projected/082d43b2-0714-47d3-9f71-9d386e89b56f-kube-api-access-dchr9\") pod \"swift-operator-controller-manager-85dd56d4cc-hs86v\" (UID: \"082d43b2-0714-47d3-9f71-9d386e89b56f\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.000996 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.056897 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nlg6\" (UniqueName: \"kubernetes.io/projected/1c5d42e4-5a3b-4cea-b0a7-3f334d801f22-kube-api-access-2nlg6\") pod \"ovn-operator-controller-manager-55db956ddc-6lsrf\" (UID: \"1c5d42e4-5a3b-4cea-b0a7-3f334d801f22\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.096605 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqjt\" (UniqueName: \"kubernetes.io/projected/fb0839da-0f44-43dd-a240-72c0f032f30a-kube-api-access-8qqjt\") pod \"manila-operator-controller-manager-c5fd576c9-gkv5c\" (UID: \"fb0839da-0f44-43dd-a240-72c0f032f30a\") " pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 
13:26:06.103902 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.104840 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.113419 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.165204 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwwx7\" (UniqueName: \"kubernetes.io/projected/db113188-8b44-43d6-8e79-8231fbfff914-kube-api-access-hwwx7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.165223 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.165302 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.166630 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dchr9\" (UniqueName: \"kubernetes.io/projected/082d43b2-0714-47d3-9f71-9d386e89b56f-kube-api-access-dchr9\") pod \"swift-operator-controller-manager-85dd56d4cc-hs86v\" (UID: \"082d43b2-0714-47d3-9f71-9d386e89b56f\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.166662 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:06 crc kubenswrapper[4959]: E0121 13:26:06.166777 4959 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:06 crc kubenswrapper[4959]: E0121 13:26:06.166813 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert podName:dd86c02d-b4ab-42e5-9a16-a968c0aeba96 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:07.166799772 +0000 UTC m=+1028.129830315 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert") pod "infra-operator-controller-manager-77c48c7859-96qjh" (UID: "dd86c02d-b4ab-42e5-9a16-a968c0aeba96") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.192116 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-fz5jf" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.198969 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ncf\" (UniqueName: \"kubernetes.io/projected/2776361f-f7a5-452f-b847-f1370993200b-kube-api-access-d5ncf\") pod \"placement-operator-controller-manager-686df47fcb-6wjxl\" (UID: \"2776361f-f7a5-452f-b847-f1370993200b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.231565 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.236349 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.246509 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cczcw" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.267788 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6tsz\" (UniqueName: \"kubernetes.io/projected/061f7370-4309-4e68-97f3-f57e9832939b-kube-api-access-s6tsz\") pod \"telemetry-operator-controller-manager-5f8f495fcf-qzds5\" (UID: \"061f7370-4309-4e68-97f3-f57e9832939b\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.269231 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dchr9\" (UniqueName: \"kubernetes.io/projected/082d43b2-0714-47d3-9f71-9d386e89b56f-kube-api-access-dchr9\") pod \"swift-operator-controller-manager-85dd56d4cc-hs86v\" (UID: \"082d43b2-0714-47d3-9f71-9d386e89b56f\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.299901 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.325335 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.326994 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.342215 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wn888" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.356561 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.359417 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.369810 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrxv\" (UniqueName: \"kubernetes.io/projected/9247c01e-fd0d-4fe6-8a9b-f50dec002cac-kube-api-access-lcrxv\") pod \"test-operator-controller-manager-7cd8bc9dbb-pcxjt\" (UID: \"9247c01e-fd0d-4fe6-8a9b-f50dec002cac\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.369885 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6tsz\" (UniqueName: \"kubernetes.io/projected/061f7370-4309-4e68-97f3-f57e9832939b-kube-api-access-s6tsz\") pod \"telemetry-operator-controller-manager-5f8f495fcf-qzds5\" (UID: \"061f7370-4309-4e68-97f3-f57e9832939b\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.369975 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29wf7\" (UniqueName: \"kubernetes.io/projected/747460d1-12de-4c88-b0d8-879ff7b62834-kube-api-access-29wf7\") pod \"watcher-operator-controller-manager-64cd966744-6jp8j\" (UID: \"747460d1-12de-4c88-b0d8-879ff7b62834\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.403534 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.451917 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6tsz\" (UniqueName: \"kubernetes.io/projected/061f7370-4309-4e68-97f3-f57e9832939b-kube-api-access-s6tsz\") pod \"telemetry-operator-controller-manager-5f8f495fcf-qzds5\" (UID: \"061f7370-4309-4e68-97f3-f57e9832939b\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.470707 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrxv\" (UniqueName: \"kubernetes.io/projected/9247c01e-fd0d-4fe6-8a9b-f50dec002cac-kube-api-access-lcrxv\") pod \"test-operator-controller-manager-7cd8bc9dbb-pcxjt\" (UID: \"9247c01e-fd0d-4fe6-8a9b-f50dec002cac\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.470804 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29wf7\" (UniqueName: \"kubernetes.io/projected/747460d1-12de-4c88-b0d8-879ff7b62834-kube-api-access-29wf7\") pod \"watcher-operator-controller-manager-64cd966744-6jp8j\" (UID: \"747460d1-12de-4c88-b0d8-879ff7b62834\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.471287 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.506434 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.527192 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.528115 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.530937 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wtp4x" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.530987 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrxv\" (UniqueName: \"kubernetes.io/projected/9247c01e-fd0d-4fe6-8a9b-f50dec002cac-kube-api-access-lcrxv\") pod \"test-operator-controller-manager-7cd8bc9dbb-pcxjt\" (UID: \"9247c01e-fd0d-4fe6-8a9b-f50dec002cac\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.531106 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.531884 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.540134 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29wf7\" (UniqueName: \"kubernetes.io/projected/747460d1-12de-4c88-b0d8-879ff7b62834-kube-api-access-29wf7\") pod \"watcher-operator-controller-manager-64cd966744-6jp8j\" (UID: \"747460d1-12de-4c88-b0d8-879ff7b62834\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.550536 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.571707 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.609151 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.609949 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.616311 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-t6mdx" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.625699 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.674250 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsll2\" (UniqueName: \"kubernetes.io/projected/b5d1151c-e9f0-4bc3-b0da-b3df5470a149-kube-api-access-vsll2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fp76n\" (UID: \"b5d1151c-e9f0-4bc3-b0da-b3df5470a149\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.674309 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.674342 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlk85\" (UniqueName: \"kubernetes.io/projected/c8660b47-58d0-48c2-8359-ec471c30158a-kube-api-access-mlk85\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.674385 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.700299 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc"] Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.701028 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9xpgz" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.703477 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.777609 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsll2\" (UniqueName: \"kubernetes.io/projected/b5d1151c-e9f0-4bc3-b0da-b3df5470a149-kube-api-access-vsll2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fp76n\" (UID: \"b5d1151c-e9f0-4bc3-b0da-b3df5470a149\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.777642 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.777670 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlk85\" (UniqueName: \"kubernetes.io/projected/c8660b47-58d0-48c2-8359-ec471c30158a-kube-api-access-mlk85\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.777705 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:06 crc kubenswrapper[4959]: E0121 13:26:06.777851 4959 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:26:06 crc kubenswrapper[4959]: E0121 13:26:06.777892 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:07.277878667 +0000 UTC m=+1028.240909210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "metrics-server-cert" not found Jan 21 13:26:06 crc kubenswrapper[4959]: E0121 13:26:06.778880 4959 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:26:06 crc kubenswrapper[4959]: E0121 13:26:06.778922 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:07.278909205 +0000 UTC m=+1028.241939748 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "webhook-server-cert" not found Jan 21 13:26:06 crc kubenswrapper[4959]: E0121 13:26:06.783567 4959 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 13:26:06 crc kubenswrapper[4959]: E0121 13:26:06.783618 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert podName:db113188-8b44-43d6-8e79-8231fbfff914 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:07.283604432 +0000 UTC m=+1028.246634975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" (UID: "db113188-8b44-43d6-8e79-8231fbfff914") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.819131 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlk85\" (UniqueName: \"kubernetes.io/projected/c8660b47-58d0-48c2-8359-ec471c30158a-kube-api-access-mlk85\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.819268 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsll2\" (UniqueName: \"kubernetes.io/projected/b5d1151c-e9f0-4bc3-b0da-b3df5470a149-kube-api-access-vsll2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fp76n\" (UID: \"b5d1151c-e9f0-4bc3-b0da-b3df5470a149\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" Jan 21 13:26:06 crc kubenswrapper[4959]: I0121 13:26:06.827197 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.027775 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.077508 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.144687 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb"] Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.169742 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq"] Jan 21 13:26:07 crc kubenswrapper[4959]: W0121 13:26:07.192658 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc5305f2_72f7_40a6_b0c9_d3abaf7ea1c7.slice/crio-80ca0a48c1d2a21ff42d50f86cb83cc903ce2ecb14d6aef4daef37e9b0b768a7 WatchSource:0}: Error finding container 80ca0a48c1d2a21ff42d50f86cb83cc903ce2ecb14d6aef4daef37e9b0b768a7: Status 404 returned error can't find the container with id 80ca0a48c1d2a21ff42d50f86cb83cc903ce2ecb14d6aef4daef37e9b0b768a7 Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.232988 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" event={"ID":"8075108b-d9e1-40d4-9e2e-4faa59061778","Type":"ContainerStarted","Data":"54dc68d7362bcef593c6bd6032d1f6314239bc77e71350e785629c18edaa76a2"} Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.234913 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" event={"ID":"a588ba98-33be-46aa-a582-4403d3a09a95","Type":"ContainerStarted","Data":"5584c38d1a9db627fca47a5d6b4476624605162c863439e2e44099ee5a49de0d"} Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.238365 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" event={"ID":"cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7","Type":"ContainerStarted","Data":"80ca0a48c1d2a21ff42d50f86cb83cc903ce2ecb14d6aef4daef37e9b0b768a7"} Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.249436 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:07 crc kubenswrapper[4959]: E0121 13:26:07.249597 4959 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:07 crc kubenswrapper[4959]: E0121 13:26:07.249694 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert podName:dd86c02d-b4ab-42e5-9a16-a968c0aeba96 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:09.249671948 +0000 UTC m=+1030.212702491 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert") pod "infra-operator-controller-manager-77c48c7859-96qjh" (UID: "dd86c02d-b4ab-42e5-9a16-a968c0aeba96") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.310729 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8"] Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.347567 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-d69ql"] Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.350712 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.350810 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.350851 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:07 crc kubenswrapper[4959]: E0121 13:26:07.351558 4959 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:26:07 crc kubenswrapper[4959]: E0121 13:26:07.351857 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:08.351839236 +0000 UTC m=+1029.314869789 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "webhook-server-cert" not found Jan 21 13:26:07 crc kubenswrapper[4959]: E0121 13:26:07.351916 4959 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:26:07 crc kubenswrapper[4959]: E0121 13:26:07.351945 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:08.351935038 +0000 UTC m=+1029.314965581 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "metrics-server-cert" not found Jan 21 13:26:07 crc kubenswrapper[4959]: E0121 13:26:07.351995 4959 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:26:07 crc kubenswrapper[4959]: E0121 13:26:07.352022 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert podName:db113188-8b44-43d6-8e79-8231fbfff914 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:08.35201381 +0000 UTC m=+1029.315044353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" (UID: "db113188-8b44-43d6-8e79-8231fbfff914") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.652165 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp"] Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.659146 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84"] Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.804321 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-crgjd"] Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.810284 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8"] Jan 21 13:26:07 crc kubenswrapper[4959]: W0121 13:26:07.814145 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae0b11f6_2763_4884_b37b_ec8dc6548a79.slice/crio-3b25cdf41e8da2175a05af3499b55544d1ded48bcf4b52f6744a13b87862ce52 WatchSource:0}: Error finding container 3b25cdf41e8da2175a05af3499b55544d1ded48bcf4b52f6744a13b87862ce52: Status 404 returned error can't find the container with id 3b25cdf41e8da2175a05af3499b55544d1ded48bcf4b52f6744a13b87862ce52 Jan 21 13:26:07 crc kubenswrapper[4959]: I0121 13:26:07.828970 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r"] Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.094123 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5"] Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.106292 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v"] Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.144860 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf"] Jan 21 13:26:08 crc kubenswrapper[4959]: W0121 13:26:08.145468 4959 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2776361f_f7a5_452f_b847_f1370993200b.slice/crio-12691bea4c3fa325baedfb96fd546afcbbbe9399f2af63c5bb9a1c3bb925d9c0 WatchSource:0}: Error finding container 12691bea4c3fa325baedfb96fd546afcbbbe9399f2af63c5bb9a1c3bb925d9c0: Status 404 returned error can't find the container with id 12691bea4c3fa325baedfb96fd546afcbbbe9399f2af63c5bb9a1c3bb925d9c0 Jan 21 13:26:08 crc kubenswrapper[4959]: W0121 13:26:08.154021 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c5d42e4_5a3b_4cea_b0a7_3f334d801f22.slice/crio-5efe03cd62861634d4d4ede48132e20a691f3a80d8fce2e0280d1adb729edd04 WatchSource:0}: Error finding container 5efe03cd62861634d4d4ede48132e20a691f3a80d8fce2e0280d1adb729edd04: Status 404 returned error can't find the container with id 5efe03cd62861634d4d4ede48132e20a691f3a80d8fce2e0280d1adb729edd04 Jan 21 13:26:08 crc kubenswrapper[4959]: W0121 13:26:08.155807 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod747460d1_12de_4c88_b0d8_879ff7b62834.slice/crio-b18c1d87b1b0ea24925c8ee5d7fbe1d8c8ba45ffc2805444eeeec584edc63ea4 WatchSource:0}: Error finding container b18c1d87b1b0ea24925c8ee5d7fbe1d8c8ba45ffc2805444eeeec584edc63ea4: Status 404 returned error can't find the container with id b18c1d87b1b0ea24925c8ee5d7fbe1d8c8ba45ffc2805444eeeec584edc63ea4 Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.165842 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl"] Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.179756 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j"] Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.190887 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nlg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-6lsrf_openstack-operators(1c5d42e4-5a3b-4cea-b0a7-3f334d801f22): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.193208 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" podUID="1c5d42e4-5a3b-4cea-b0a7-3f334d801f22" Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.196459 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt"] Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.199416 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2t7v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-h4c6v_openstack-operators(a24ac487-ea43-40fd-b6ea-cd7740cf80ce): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.200966 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" podUID="a24ac487-ea43-40fd-b6ea-cd7740cf80ce" Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.227183 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v"] Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.250071 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c"] Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.263386 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm"] Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.268451 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lcrxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-pcxjt_openstack-operators(9247c01e-fd0d-4fe6-8a9b-f50dec002cac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.270181 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" podUID="9247c01e-fd0d-4fe6-8a9b-f50dec002cac" Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.274424 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" event={"ID":"ed8f3e55-7ed1-4794-8171-461cf3ebc132","Type":"ContainerStarted","Data":"650be39f979bb687aa3b9a3d2c23a7d7df8df6b02ce5539d279cb85033ae831b"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.289611 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n"] Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.297382 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" event={"ID":"a6ef5ba7-019c-416f-9003-54c5ce70f01a","Type":"ContainerStarted","Data":"2c110a87de3f5e2a1e81b6df6437c0910b364eb9d04eb5e41e8e20d2fae8575d"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.307595 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" event={"ID":"061f7370-4309-4e68-97f3-f57e9832939b","Type":"ContainerStarted","Data":"4e69092766d6830718b55a4f10ddd21ac1eea6e5d954c2a41cfb56f49e10911a"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.308719 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" event={"ID":"3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d","Type":"ContainerStarted","Data":"3c80fab81100237c314204aa5cb7e6783adab10b575d3b3876e1f1cc822d4020"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.313803 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" event={"ID":"747460d1-12de-4c88-b0d8-879ff7b62834","Type":"ContainerStarted","Data":"b18c1d87b1b0ea24925c8ee5d7fbe1d8c8ba45ffc2805444eeeec584edc63ea4"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.315267 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" 
event={"ID":"ae0b11f6-2763-4884-b37b-ec8dc6548a79","Type":"ContainerStarted","Data":"3b25cdf41e8da2175a05af3499b55544d1ded48bcf4b52f6744a13b87862ce52"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.321979 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" event={"ID":"fb0839da-0f44-43dd-a240-72c0f032f30a","Type":"ContainerStarted","Data":"ba471f27bc9cfab7e9c8fdda013eb0477c3a8d994767099eff4a024573b622af"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.323366 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" event={"ID":"d3753491-e2ab-4cf4-b8be-7de464734343","Type":"ContainerStarted","Data":"445b69eff59a4959ab2cf39efe3380e4d2fbd00fb55506de80e9a908963862a8"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.368637 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" event={"ID":"a24ac487-ea43-40fd-b6ea-cd7740cf80ce","Type":"ContainerStarted","Data":"8411bf878917672ecddb7bb5e134af2082ce4505ad9658fbe5857a094b007a20"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.376368 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.376426 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.376524 4959 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.376537 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.376582 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:10.376565116 +0000 UTC m=+1031.339595649 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "metrics-server-cert" not found Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.376644 4959 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.376678 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:10.376667748 +0000 UTC m=+1031.339698301 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "webhook-server-cert" not found Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.377044 4959 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.377085 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert podName:db113188-8b44-43d6-8e79-8231fbfff914 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:10.377073969 +0000 UTC m=+1031.340104512 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" (UID: "db113188-8b44-43d6-8e79-8231fbfff914") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.382982 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" podUID="a24ac487-ea43-40fd-b6ea-cd7740cf80ce" Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.387342 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" event={"ID":"1c5d42e4-5a3b-4cea-b0a7-3f334d801f22","Type":"ContainerStarted","Data":"5efe03cd62861634d4d4ede48132e20a691f3a80d8fce2e0280d1adb729edd04"} Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.388841 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" podUID="1c5d42e4-5a3b-4cea-b0a7-3f334d801f22" Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.389010 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsll2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fp76n_openstack-operators(b5d1151c-e9f0-4bc3-b0da-b3df5470a149): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.389270 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" event={"ID":"988f7f11-664f-4f70-9b38-2852dd3b17a0","Type":"ContainerStarted","Data":"edcb36a45179583cc02176b728e27f705626b97ea672fc3ebcc24c9d89363d7d"} Jan 21 13:26:08 crc kubenswrapper[4959]: E0121 13:26:08.390453 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" podUID="b5d1151c-e9f0-4bc3-b0da-b3df5470a149" Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.391271 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" event={"ID":"49ec4962-8c60-4bd2-9ada-8f25cc21baa4","Type":"ContainerStarted","Data":"c40b3422438bb7db4909a877d5bf8a8b2aab28f3484adeaa3bde97a0ee2d5b17"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.396344 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" event={"ID":"082d43b2-0714-47d3-9f71-9d386e89b56f","Type":"ContainerStarted","Data":"2497043a98ec6bcc75d5e3bf03aa345a023d4f1f1e940802d0baec57f73158df"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.400475 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" event={"ID":"da20d161-5e78-4c3d-a021-75244caefb16","Type":"ContainerStarted","Data":"de05a8c66bf5fe3c71085c6f253bc51860c65f3bb33410c5583c1b6e7ba38a98"} Jan 21 13:26:08 crc kubenswrapper[4959]: I0121 13:26:08.413319 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" event={"ID":"2776361f-f7a5-452f-b847-f1370993200b","Type":"ContainerStarted","Data":"12691bea4c3fa325baedfb96fd546afcbbbe9399f2af63c5bb9a1c3bb925d9c0"} Jan 21 13:26:09 crc kubenswrapper[4959]: I0121 13:26:09.288332 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:09 crc kubenswrapper[4959]: E0121 13:26:09.288497 4959 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:09 crc kubenswrapper[4959]: E0121 13:26:09.288547 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert podName:dd86c02d-b4ab-42e5-9a16-a968c0aeba96 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:13.288532671 +0000 UTC m=+1034.251563214 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert") pod "infra-operator-controller-manager-77c48c7859-96qjh" (UID: "dd86c02d-b4ab-42e5-9a16-a968c0aeba96") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:09 crc kubenswrapper[4959]: I0121 13:26:09.428793 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" event={"ID":"9247c01e-fd0d-4fe6-8a9b-f50dec002cac","Type":"ContainerStarted","Data":"c3969702c0779fa271af6f36109f5e3d7c2332ab6cad708d39bb38473cc5d52c"} Jan 21 13:26:09 crc kubenswrapper[4959]: E0121 13:26:09.430228 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" podUID="9247c01e-fd0d-4fe6-8a9b-f50dec002cac" Jan 21 13:26:09 crc kubenswrapper[4959]: I0121 13:26:09.430403 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" event={"ID":"b5d1151c-e9f0-4bc3-b0da-b3df5470a149","Type":"ContainerStarted","Data":"94a43d2505107be70cdb427dd03878abead0acc596b5ea61bf3ae5cc00e1665f"} Jan 21 13:26:09 crc kubenswrapper[4959]: E0121 13:26:09.431656 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" podUID="a24ac487-ea43-40fd-b6ea-cd7740cf80ce" Jan 21 13:26:09 crc kubenswrapper[4959]: E0121 
13:26:09.431762 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" podUID="1c5d42e4-5a3b-4cea-b0a7-3f334d801f22" Jan 21 13:26:09 crc kubenswrapper[4959]: E0121 13:26:09.432654 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" podUID="b5d1151c-e9f0-4bc3-b0da-b3df5470a149" Jan 21 13:26:10 crc kubenswrapper[4959]: I0121 13:26:10.407495 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:10 crc kubenswrapper[4959]: I0121 13:26:10.407969 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:10 crc kubenswrapper[4959]: I0121 13:26:10.408006 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:10 crc kubenswrapper[4959]: E0121 13:26:10.408608 4959 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:26:10 crc kubenswrapper[4959]: E0121 13:26:10.408699 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:14.408681685 +0000 UTC m=+1035.371712218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "webhook-server-cert" not found Jan 21 13:26:10 crc kubenswrapper[4959]: E0121 13:26:10.408965 4959 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:26:10 crc kubenswrapper[4959]: E0121 13:26:10.409025 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. 
No retries permitted until 2026-01-21 13:26:14.409011254 +0000 UTC m=+1035.372041797 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "metrics-server-cert" not found Jan 21 13:26:10 crc kubenswrapper[4959]: E0121 13:26:10.409079 4959 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:26:10 crc kubenswrapper[4959]: E0121 13:26:10.409115 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert podName:db113188-8b44-43d6-8e79-8231fbfff914 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:14.409109346 +0000 UTC m=+1035.372139889 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" (UID: "db113188-8b44-43d6-8e79-8231fbfff914") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:26:10 crc kubenswrapper[4959]: E0121 13:26:10.459977 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" podUID="9247c01e-fd0d-4fe6-8a9b-f50dec002cac" Jan 21 13:26:10 crc kubenswrapper[4959]: E0121 13:26:10.459984 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" podUID="b5d1151c-e9f0-4bc3-b0da-b3df5470a149" Jan 21 13:26:13 crc kubenswrapper[4959]: I0121 13:26:13.339170 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:13 crc kubenswrapper[4959]: E0121 13:26:13.339621 4959 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:13 crc kubenswrapper[4959]: E0121 13:26:13.340132 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert podName:dd86c02d-b4ab-42e5-9a16-a968c0aeba96 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:21.340109676 +0000 UTC m=+1042.303140409 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert") pod "infra-operator-controller-manager-77c48c7859-96qjh" (UID: "dd86c02d-b4ab-42e5-9a16-a968c0aeba96") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:14 crc kubenswrapper[4959]: I0121 13:26:14.470231 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:14 crc kubenswrapper[4959]: I0121 13:26:14.470296 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:14 crc kubenswrapper[4959]: I0121 13:26:14.470429 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:14 crc kubenswrapper[4959]: E0121 13:26:14.470696 4959 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:26:14 crc kubenswrapper[4959]: E0121 13:26:14.470768 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert podName:db113188-8b44-43d6-8e79-8231fbfff914 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:22.470746745 +0000 UTC m=+1043.433777298 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" (UID: "db113188-8b44-43d6-8e79-8231fbfff914") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:26:14 crc kubenswrapper[4959]: E0121 13:26:14.471213 4959 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:26:14 crc kubenswrapper[4959]: E0121 13:26:14.471359 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:22.471322821 +0000 UTC m=+1043.434353524 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "metrics-server-cert" not found Jan 21 13:26:14 crc kubenswrapper[4959]: E0121 13:26:14.471408 4959 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:26:14 crc kubenswrapper[4959]: E0121 13:26:14.471452 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:22.471438784 +0000 UTC m=+1043.434469487 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "webhook-server-cert" not found Jan 21 13:26:21 crc kubenswrapper[4959]: I0121 13:26:21.427540 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:21 crc kubenswrapper[4959]: E0121 13:26:21.428111 4959 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:21 crc kubenswrapper[4959]: E0121 13:26:21.428168 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert podName:dd86c02d-b4ab-42e5-9a16-a968c0aeba96 nodeName:}" failed. No retries permitted until 2026-01-21 13:26:37.428148769 +0000 UTC m=+1058.391179312 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert") pod "infra-operator-controller-manager-77c48c7859-96qjh" (UID: "dd86c02d-b4ab-42e5-9a16-a968c0aeba96") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:26:22 crc kubenswrapper[4959]: I0121 13:26:22.541831 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:22 crc kubenswrapper[4959]: I0121 13:26:22.541950 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:22 crc kubenswrapper[4959]: I0121 13:26:22.542000 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:22 crc kubenswrapper[4959]: E0121 13:26:22.542341 4959 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:26:22 crc kubenswrapper[4959]: E0121 13:26:22.542404 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:38.542389314 +0000 UTC m=+1059.505419857 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "metrics-server-cert" not found Jan 21 13:26:22 crc kubenswrapper[4959]: E0121 13:26:22.542463 4959 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:26:22 crc kubenswrapper[4959]: E0121 13:26:22.542609 4959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs podName:c8660b47-58d0-48c2-8359-ec471c30158a nodeName:}" failed. No retries permitted until 2026-01-21 13:26:38.54258468 +0000 UTC m=+1059.505615253 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs") pod "openstack-operator-controller-manager-5bd5c98d7d-k5z9b" (UID: "c8660b47-58d0-48c2-8359-ec471c30158a") : secret "webhook-server-cert" not found Jan 21 13:26:22 crc kubenswrapper[4959]: I0121 13:26:22.549227 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db113188-8b44-43d6-8e79-8231fbfff914-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548fndp\" (UID: \"db113188-8b44-43d6-8e79-8231fbfff914\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:22 crc kubenswrapper[4959]: I0121 13:26:22.634219 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:27 crc kubenswrapper[4959]: E0121 13:26:27.803404 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737" Jan 21 13:26:27 crc kubenswrapper[4959]: E0121 13:26:27.803869 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5ncf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-6wjxl_openstack-operators(2776361f-f7a5-452f-b847-f1370993200b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:26:27 crc kubenswrapper[4959]: E0121 13:26:27.805078 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" podUID="2776361f-f7a5-452f-b847-f1370993200b" Jan 21 13:26:28 crc kubenswrapper[4959]: E0121 13:26:28.619683 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 13:26:28 crc kubenswrapper[4959]: E0121 13:26:28.619866 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhfn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-g4bs8_openstack-operators(a6ef5ba7-019c-416f-9003-54c5ce70f01a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:26:28 crc kubenswrapper[4959]: E0121 13:26:28.621213 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" podUID="a6ef5ba7-019c-416f-9003-54c5ce70f01a" Jan 21 13:26:28 crc kubenswrapper[4959]: E0121 13:26:28.645944 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" podUID="2776361f-f7a5-452f-b847-f1370993200b" Jan 21 13:26:28 crc kubenswrapper[4959]: E0121 13:26:28.646381 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" podUID="a6ef5ba7-019c-416f-9003-54c5ce70f01a" Jan 21 13:26:29 crc kubenswrapper[4959]: E0121 13:26:29.997798 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 13:26:29 crc kubenswrapper[4959]: E0121 13:26:29.998277 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6tsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-qzds5_openstack-operators(061f7370-4309-4e68-97f3-f57e9832939b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:26:29 crc kubenswrapper[4959]: E0121 13:26:29.999632 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" podUID="061f7370-4309-4e68-97f3-f57e9832939b" Jan 21 13:26:30 crc kubenswrapper[4959]: E0121 13:26:30.723931 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" podUID="061f7370-4309-4e68-97f3-f57e9832939b" Jan 21 13:26:30 crc kubenswrapper[4959]: E0121 13:26:30.811421 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 21 13:26:30 crc kubenswrapper[4959]: E0121 13:26:30.811631 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lh64w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-kslqm_openstack-operators(3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:26:30 crc kubenswrapper[4959]: E0121 13:26:30.812768 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" podUID="3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d" Jan 21 13:26:31 crc kubenswrapper[4959]: E0121 13:26:31.705115 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92" Jan 21 13:26:31 crc kubenswrapper[4959]: E0121 13:26:31.705316 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dchr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-hs86v_openstack-operators(082d43b2-0714-47d3-9f71-9d386e89b56f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:26:31 crc kubenswrapper[4959]: E0121 13:26:31.707440 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" podUID="082d43b2-0714-47d3-9f71-9d386e89b56f" Jan 21 13:26:31 crc kubenswrapper[4959]: E0121 13:26:31.728400 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" podUID="082d43b2-0714-47d3-9f71-9d386e89b56f" Jan 21 13:26:31 crc kubenswrapper[4959]: E0121 13:26:31.728905 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" 
podUID="3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d" Jan 21 13:26:32 crc kubenswrapper[4959]: E0121 13:26:32.933125 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 13:26:32 crc kubenswrapper[4959]: E0121 13:26:32.933548 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bc8ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-crgjd_openstack-operators(ae0b11f6-2763-4884-b37b-ec8dc6548a79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:26:32 crc kubenswrapper[4959]: E0121 13:26:32.934764 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" podUID="ae0b11f6-2763-4884-b37b-ec8dc6548a79" Jan 21 13:26:33 crc kubenswrapper[4959]: E0121 13:26:33.761829 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" podUID="ae0b11f6-2763-4884-b37b-ec8dc6548a79" Jan 21 13:26:33 crc kubenswrapper[4959]: E0121 13:26:33.902243 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 13:26:33 crc kubenswrapper[4959]: E0121 13:26:33.902417 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqmhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-dfzqp_openstack-operators(49ec4962-8c60-4bd2-9ada-8f25cc21baa4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:26:33 crc kubenswrapper[4959]: E0121 13:26:33.903599 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" 
podUID="49ec4962-8c60-4bd2-9ada-8f25cc21baa4" Jan 21 13:26:34 crc kubenswrapper[4959]: E0121 13:26:34.763695 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" podUID="49ec4962-8c60-4bd2-9ada-8f25cc21baa4" Jan 21 13:26:37 crc kubenswrapper[4959]: E0121 13:26:37.262800 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.83:5001/openstack-k8s-operators/manila-operator:e69744013a8995708533da12eae59128b725acae" Jan 21 13:26:37 crc kubenswrapper[4959]: E0121 13:26:37.263162 4959 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.83:5001/openstack-k8s-operators/manila-operator:e69744013a8995708533da12eae59128b725acae" Jan 21 13:26:37 crc kubenswrapper[4959]: E0121 13:26:37.263363 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.83:5001/openstack-k8s-operators/manila-operator:e69744013a8995708533da12eae59128b725acae,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qqjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-c5fd576c9-gkv5c_openstack-operators(fb0839da-0f44-43dd-a240-72c0f032f30a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:26:37 crc kubenswrapper[4959]: E0121 13:26:37.264553 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" podUID="fb0839da-0f44-43dd-a240-72c0f032f30a" Jan 21 13:26:37 crc kubenswrapper[4959]: I0121 13:26:37.508856 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:37 crc kubenswrapper[4959]: I0121 13:26:37.516980 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd86c02d-b4ab-42e5-9a16-a968c0aeba96-cert\") pod \"infra-operator-controller-manager-77c48c7859-96qjh\" (UID: \"dd86c02d-b4ab-42e5-9a16-a968c0aeba96\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:37 crc kubenswrapper[4959]: E0121 13:26:37.780708 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.83:5001/openstack-k8s-operators/manila-operator:e69744013a8995708533da12eae59128b725acae\\\"\"" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" podUID="fb0839da-0f44-43dd-a240-72c0f032f30a" Jan 21 13:26:37 crc kubenswrapper[4959]: I0121 13:26:37.815071 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:38 crc kubenswrapper[4959]: I0121 13:26:38.627823 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:38 crc kubenswrapper[4959]: I0121 13:26:38.627924 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:38 crc kubenswrapper[4959]: I0121 13:26:38.632514 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-metrics-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:38 crc kubenswrapper[4959]: I0121 13:26:38.634876 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c8660b47-58d0-48c2-8359-ec471c30158a-webhook-certs\") pod \"openstack-operator-controller-manager-5bd5c98d7d-k5z9b\" (UID: \"c8660b47-58d0-48c2-8359-ec471c30158a\") " pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:38 crc kubenswrapper[4959]: I0121 13:26:38.830135 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.333570 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp"] Jan 21 13:26:39 crc kubenswrapper[4959]: W0121 13:26:39.360299 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb113188_8b44_43d6_8e79_8231fbfff914.slice/crio-0450fc0d152a948475de34c39ad9f49c16ffd3b57127afb51de69db166a83d2a WatchSource:0}: Error finding container 0450fc0d152a948475de34c39ad9f49c16ffd3b57127afb51de69db166a83d2a: Status 404 returned error can't find the container with id 0450fc0d152a948475de34c39ad9f49c16ffd3b57127afb51de69db166a83d2a Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.592965 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh"] Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.790944 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" event={"ID":"988f7f11-664f-4f70-9b38-2852dd3b17a0","Type":"ContainerStarted","Data":"f26478e9e6e4da96479ea42d155097712ffdedaa68a5e71bcff9e15106cb7cc7"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.791022 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.792771 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" event={"ID":"9247c01e-fd0d-4fe6-8a9b-f50dec002cac","Type":"ContainerStarted","Data":"ca1d2504dd72ddf562ec23c008c5b94e80b6be5655b9ad19bc96a410dd033315"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.792955 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.849573 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" event={"ID":"ed8f3e55-7ed1-4794-8171-461cf3ebc132","Type":"ContainerStarted","Data":"f265b949d7613514201a702775f61cefb21d16473410aa2ef4b635edcf5fe46a"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.850395 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.854176 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" event={"ID":"b5d1151c-e9f0-4bc3-b0da-b3df5470a149","Type":"ContainerStarted","Data":"093d15c0c114619fd535345201701f95bfc37efb0e5adf1d161800d6853b28c4"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.857581 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" event={"ID":"a24ac487-ea43-40fd-b6ea-cd7740cf80ce","Type":"ContainerStarted","Data":"bb534b47164e8b1127d3ef1063fdc3ab07c8d1ca4540a28fc0245b67d07381f4"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.858251 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.859940 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" event={"ID":"da20d161-5e78-4c3d-a021-75244caefb16","Type":"ContainerStarted","Data":"21b7051f714887c30201e64eb2701d06ed1bc06030818f4159b0f35820337177"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.860272 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.861791 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" event={"ID":"cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7","Type":"ContainerStarted","Data":"3bb1ded401531450c983341b073b8982e103fc0423b6f930a11767844e5195b6"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.862621 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.885025 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" event={"ID":"1c5d42e4-5a3b-4cea-b0a7-3f334d801f22","Type":"ContainerStarted","Data":"8f5c1a994a900bb1f282c7129da4a86cfd3a2e30ea94525983418a137284ec70"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.885801 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.887841 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" event={"ID":"dd86c02d-b4ab-42e5-9a16-a968c0aeba96","Type":"ContainerStarted","Data":"4a5661d5979ae6919a74f1b444a503a1a00a79ef4deb886aab27a180f65d2c86"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.891751 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" event={"ID":"a588ba98-33be-46aa-a582-4403d3a09a95","Type":"ContainerStarted","Data":"9a3cccc140d7233093350146746e539781df0250c8e53651b887d44ad53cbfbd"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.892528 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.896571 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" event={"ID":"8075108b-d9e1-40d4-9e2e-4faa59061778","Type":"ContainerStarted","Data":"153d74a4f902b04650fedf18444f624fbdbdfc367dd5289288e74f3a9972cbf7"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.896754 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.902501 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" 
event={"ID":"db113188-8b44-43d6-8e79-8231fbfff914","Type":"ContainerStarted","Data":"0450fc0d152a948475de34c39ad9f49c16ffd3b57127afb51de69db166a83d2a"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.904919 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" event={"ID":"747460d1-12de-4c88-b0d8-879ff7b62834","Type":"ContainerStarted","Data":"583bb91d1d3dd837167afcad0d1ddbabc5ebed8ba8752f8060c4711639baf7e1"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.905708 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.910075 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" event={"ID":"d3753491-e2ab-4cf4-b8be-7de464734343","Type":"ContainerStarted","Data":"7a535546567d9ebc45e1985d60d335aab8b45482d61740dfd26c046ae25def06"} Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.910840 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.916877 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b"] Jan 21 13:26:39 crc kubenswrapper[4959]: I0121 13:26:39.938586 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" podStartSLOduration=9.492461343 podStartE2EDuration="34.938566462s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.462306678 +0000 UTC m=+1028.425337221" lastFinishedPulling="2026-01-21 13:26:32.908411797 +0000 UTC m=+1053.871442340" observedRunningTime="2026-01-21 13:26:39.93667972 +0000 UTC m=+1060.899710263" watchObservedRunningTime="2026-01-21 13:26:39.938566462 +0000 UTC m=+1060.901596995" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.073931 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" podStartSLOduration=7.900208139 podStartE2EDuration="35.073909298s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.210520807 +0000 UTC m=+1028.173551350" lastFinishedPulling="2026-01-21 13:26:34.384221966 +0000 UTC m=+1055.347252509" observedRunningTime="2026-01-21 13:26:40.060667639 +0000 UTC m=+1061.023698182" watchObservedRunningTime="2026-01-21 13:26:40.073909298 +0000 UTC m=+1061.036939841" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.102448 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" podStartSLOduration=4.409075056 podStartE2EDuration="35.10242682s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.199255912 +0000 UTC m=+1029.162286455" lastFinishedPulling="2026-01-21 13:26:38.892607676 +0000 UTC m=+1059.855638219" observedRunningTime="2026-01-21 13:26:40.10203091 +0000 UTC m=+1061.065061483" watchObservedRunningTime="2026-01-21 13:26:40.10242682 +0000 UTC m=+1061.065457363" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.132701 4959 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" podStartSLOduration=11.095064247 podStartE2EDuration="35.13267899s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.662926723 +0000 UTC m=+1028.625957266" lastFinishedPulling="2026-01-21 13:26:31.700541466 +0000 UTC m=+1052.663572009" observedRunningTime="2026-01-21 13:26:40.129384891 +0000 UTC m=+1061.092415454" watchObservedRunningTime="2026-01-21 13:26:40.13267899 +0000 UTC m=+1061.095709543" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.158033 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fp76n" podStartSLOduration=3.194427391 podStartE2EDuration="34.158013826s" podCreationTimestamp="2026-01-21 13:26:06 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.38891248 +0000 UTC m=+1029.351943023" lastFinishedPulling="2026-01-21 13:26:39.352498915 +0000 UTC m=+1060.315529458" observedRunningTime="2026-01-21 13:26:40.155013355 +0000 UTC m=+1061.118043908" watchObservedRunningTime="2026-01-21 13:26:40.158013826 +0000 UTC m=+1061.121044369" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.248826 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" podStartSLOduration=4.533262472 podStartE2EDuration="35.248805276s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.190402823 +0000 UTC m=+1029.153433366" lastFinishedPulling="2026-01-21 13:26:38.905945627 +0000 UTC m=+1059.868976170" observedRunningTime="2026-01-21 13:26:40.225160135 +0000 UTC m=+1061.188190688" watchObservedRunningTime="2026-01-21 13:26:40.248805276 +0000 UTC m=+1061.211835819" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.251495 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" podStartSLOduration=14.462268173 podStartE2EDuration="35.251480548s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.818819466 +0000 UTC m=+1028.781850009" lastFinishedPulling="2026-01-21 13:26:28.608031841 +0000 UTC m=+1049.571062384" observedRunningTime="2026-01-21 13:26:40.248147048 +0000 UTC m=+1061.211177591" watchObservedRunningTime="2026-01-21 13:26:40.251480548 +0000 UTC m=+1061.214511101" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.270538 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" podStartSLOduration=21.551480328 podStartE2EDuration="35.270520464s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.210525297 +0000 UTC m=+1028.173555840" lastFinishedPulling="2026-01-21 13:26:20.929565433 +0000 UTC m=+1041.892595976" observedRunningTime="2026-01-21 13:26:40.266888705 +0000 UTC m=+1061.229919278" watchObservedRunningTime="2026-01-21 13:26:40.270520464 +0000 UTC m=+1061.233551007" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.325038 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" podStartSLOduration=5.358541736 podStartE2EDuration="34.32502177s" podCreationTimestamp="2026-01-21 13:26:06 +0000 UTC" 
firstStartedPulling="2026-01-21 13:26:08.190385372 +0000 UTC m=+1029.153415925" lastFinishedPulling="2026-01-21 13:26:37.156865416 +0000 UTC m=+1058.119895959" observedRunningTime="2026-01-21 13:26:40.318836003 +0000 UTC m=+1061.281866546" watchObservedRunningTime="2026-01-21 13:26:40.32502177 +0000 UTC m=+1061.288052303" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.442080 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" podStartSLOduration=13.66074026 podStartE2EDuration="35.442062851s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:06.826937896 +0000 UTC m=+1027.789968439" lastFinishedPulling="2026-01-21 13:26:28.608260487 +0000 UTC m=+1049.571291030" observedRunningTime="2026-01-21 13:26:40.439501871 +0000 UTC m=+1061.402532424" watchObservedRunningTime="2026-01-21 13:26:40.442062851 +0000 UTC m=+1061.405093394" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.485028 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" podStartSLOduration=8.993989829 podStartE2EDuration="35.485004794s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.349561544 +0000 UTC m=+1028.312592087" lastFinishedPulling="2026-01-21 13:26:33.840576509 +0000 UTC m=+1054.803607052" observedRunningTime="2026-01-21 13:26:40.461681792 +0000 UTC m=+1061.424712355" watchObservedRunningTime="2026-01-21 13:26:40.485004794 +0000 UTC m=+1061.448035337" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.510393 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" podStartSLOduration=3.88609732 podStartE2EDuration="34.510373432s" podCreationTimestamp="2026-01-21 13:26:06 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.268330724 +0000 UTC m=+1029.231361257" lastFinishedPulling="2026-01-21 13:26:38.892606826 +0000 UTC m=+1059.855637369" observedRunningTime="2026-01-21 13:26:40.507440982 +0000 UTC m=+1061.470471525" watchObservedRunningTime="2026-01-21 13:26:40.510373432 +0000 UTC m=+1061.473403975" Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.921683 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" event={"ID":"c8660b47-58d0-48c2-8359-ec471c30158a","Type":"ContainerStarted","Data":"c4e50b24de6175c3b78fb0adf8c5b24b545650a87cb9e8db4641fd7c6628ff73"} Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.921736 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" event={"ID":"c8660b47-58d0-48c2-8359-ec471c30158a","Type":"ContainerStarted","Data":"404fb0e8db8d5f4f108e336102b3b55cf83010ca553f8272400a7a1b49cf7854"} Jan 21 13:26:40 crc kubenswrapper[4959]: I0121 13:26:40.924641 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" event={"ID":"2776361f-f7a5-452f-b847-f1370993200b","Type":"ContainerStarted","Data":"69404741a9cb92891cd0c94c7cb9f8a1219e8aa89bd3e1939794c36bfe054d36"} Jan 21 13:26:41 crc kubenswrapper[4959]: I0121 13:26:41.942831 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" Jan 21 13:26:41 crc kubenswrapper[4959]: I0121 13:26:41.976204 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" podStartSLOduration=4.970584677 podStartE2EDuration="36.9761846s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.176610999 +0000 UTC m=+1029.139641542" lastFinishedPulling="2026-01-21 13:26:40.182210922 +0000 UTC m=+1061.145241465" observedRunningTime="2026-01-21 13:26:41.962240532 +0000 UTC m=+1062.925271075" watchObservedRunningTime="2026-01-21 13:26:41.9761846 +0000 UTC m=+1062.939215143" Jan 21 13:26:42 crc kubenswrapper[4959]: I0121 13:26:42.003774 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" podStartSLOduration=36.003751967 podStartE2EDuration="36.003751967s" podCreationTimestamp="2026-01-21 13:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:26:41.997278181 +0000 UTC m=+1062.960308724" watchObservedRunningTime="2026-01-21 13:26:42.003751967 +0000 UTC m=+1062.966782510" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.468575 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-f49mc" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.518434 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-d69ql" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.551356 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-64kwb" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.589644 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.602698 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-n54x8" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.709670 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-fzq84" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.875227 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-5t88r" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.969490 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" event={"ID":"db113188-8b44-43d6-8e79-8231fbfff914","Type":"ContainerStarted","Data":"99ba9b8d554fbc461a4a568a57e4ce1732e489a3ac6ea148a3841e58701f592a"} Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.969584 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.971656 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" event={"ID":"a6ef5ba7-019c-416f-9003-54c5ce70f01a","Type":"ContainerStarted","Data":"bf1b909704f3efaffa645206bd1fa832f229ab29d927032acd75285c53b416d9"} Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.971872 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.976266 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" event={"ID":"dd86c02d-b4ab-42e5-9a16-a968c0aeba96","Type":"ContainerStarted","Data":"ab720b9f49c053d53ee8a0f7d4cd6050144c4f851e77926a57c3b77db29afe07"} Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.976360 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:26:45 crc kubenswrapper[4959]: I0121 13:26:45.981575 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-h4c6v" Jan 21 13:26:46 crc kubenswrapper[4959]: I0121 13:26:46.012907 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp" podStartSLOduration=35.051652944 podStartE2EDuration="41.012881492s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:39.392897829 +0000 UTC m=+1060.355928372" lastFinishedPulling="2026-01-21 13:26:45.354126377 +0000 UTC m=+1066.317156920" observedRunningTime="2026-01-21 13:26:46.00096663 +0000 UTC m=+1066.963997183" watchObservedRunningTime="2026-01-21 13:26:46.012881492 +0000 UTC m=+1066.975912035" Jan 21 13:26:46 crc kubenswrapper[4959]: I0121 13:26:46.024382 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" podStartSLOduration=35.266182966 podStartE2EDuration="41.024364163s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:39.604634675 +0000 UTC m=+1060.567665218" lastFinishedPulling="2026-01-21 13:26:45.362815872 +0000 UTC m=+1066.325846415" observedRunningTime="2026-01-21 13:26:46.023277434 +0000 UTC m=+1066.986307997" watchObservedRunningTime="2026-01-21 13:26:46.024364163 +0000 UTC m=+1066.987394706" Jan 21 13:26:46 crc kubenswrapper[4959]: I0121 13:26:46.054249 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8" podStartSLOduration=3.527002951 podStartE2EDuration="41.054228192s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.828516979 +0000 UTC m=+1028.791547522" lastFinishedPulling="2026-01-21 13:26:45.35574222 +0000 UTC m=+1066.318772763" observedRunningTime="2026-01-21 13:26:46.048135157 +0000 UTC m=+1067.011165710" watchObservedRunningTime="2026-01-21 13:26:46.054228192 +0000 UTC m=+1067.017258735" Jan 21 13:26:46 crc kubenswrapper[4959]: I0121 13:26:46.169123 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6lsrf" Jan 21 13:26:46 crc kubenswrapper[4959]: I0121 13:26:46.423355 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-686df47fcb-6wjxl" Jan 21 13:26:46 crc kubenswrapper[4959]: I0121 13:26:46.575408 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-pcxjt" Jan 21 13:26:46 crc kubenswrapper[4959]: I0121 13:26:46.705906 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6jp8j" Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:46.991191 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" event={"ID":"082d43b2-0714-47d3-9f71-9d386e89b56f","Type":"ContainerStarted","Data":"3123fd704c10c6da06bc2e3607997d5c98f5cc8caa58d144fc13a85c6995ed02"} Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:46.992135 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:46.996755 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" event={"ID":"061f7370-4309-4e68-97f3-f57e9832939b","Type":"ContainerStarted","Data":"88d2c89a095f45b6d92354ae0d37300a6aef3a900fb99704be54915d55940a51"} Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:46.998059 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:47.015192 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" event={"ID":"3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d","Type":"ContainerStarted","Data":"e1b942f8622fe1d184ee72e5ec5a1994a5e4627dbdfb10a619187e70cf667078"} Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:47.015577 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:47.020197 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" event={"ID":"49ec4962-8c60-4bd2-9ada-8f25cc21baa4","Type":"ContainerStarted","Data":"ad76dcf47762ad67f790bc6ac0d36842d7b02065a2292a5d459c3029921dbdf2"} Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:47.021058 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:47.042691 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v" podStartSLOduration=3.375394794 podStartE2EDuration="42.042673109s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.112148893 +0000 UTC m=+1029.075179436" lastFinishedPulling="2026-01-21 13:26:46.779427208 +0000 UTC m=+1067.742457751" observedRunningTime="2026-01-21 13:26:47.022421651 +0000 UTC m=+1067.985452194" watchObservedRunningTime="2026-01-21 13:26:47.042673109 +0000 UTC m=+1068.005703652" Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:47.043308 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm" podStartSLOduration=4.45390591 podStartE2EDuration="42.043302326s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.249612136 +0000 UTC m=+1029.212642679" lastFinishedPulling="2026-01-21 13:26:45.839008552 +0000 UTC m=+1066.802039095" observedRunningTime="2026-01-21 13:26:47.036140472 +0000 UTC m=+1067.999171025" watchObservedRunningTime="2026-01-21 13:26:47.043302326 +0000 UTC m=+1068.006332869" Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:47.055970 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" podStartSLOduration=4.343632533 podStartE2EDuration="42.055949159s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.125607407 +0000 UTC m=+1029.088637960" lastFinishedPulling="2026-01-21 13:26:45.837924033 +0000 UTC m=+1066.800954586" observedRunningTime="2026-01-21 13:26:47.050032069 +0000 UTC m=+1068.013062612" watchObservedRunningTime="2026-01-21 13:26:47.055949159 +0000 UTC m=+1068.018979702" Jan 21 13:26:47 crc kubenswrapper[4959]: I0121 13:26:47.075165 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp" podStartSLOduration=2.960734591 podStartE2EDuration="42.075142149s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.66466535 +0000 UTC m=+1028.627695893" lastFinishedPulling="2026-01-21 13:26:46.779072908 +0000 UTC m=+1067.742103451" observedRunningTime="2026-01-21 13:26:47.072763664 +0000 UTC m=+1068.035794217" watchObservedRunningTime="2026-01-21 13:26:47.075142149 +0000 UTC m=+1068.038172692" Jan 21 13:26:48 crc kubenswrapper[4959]: I0121 13:26:48.830711 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:48 crc kubenswrapper[4959]: I0121 13:26:48.839495 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5bd5c98d7d-k5z9b" Jan 21 13:26:50 crc kubenswrapper[4959]: I0121 13:26:50.053089 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" event={"ID":"ae0b11f6-2763-4884-b37b-ec8dc6548a79","Type":"ContainerStarted","Data":"4490b51bc98e76f18e607e3726d34feb761a775ad082c046dc316cedb851e051"} Jan 21 13:26:50 crc kubenswrapper[4959]: I0121 13:26:50.054484 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" Jan 21 13:26:50 crc kubenswrapper[4959]: I0121 13:26:50.073198 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd" podStartSLOduration=3.905677619 podStartE2EDuration="45.073179945s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:07.832298981 +0000 UTC m=+1028.795329524" lastFinishedPulling="2026-01-21 13:26:48.999801307 +0000 UTC m=+1069.962831850" observedRunningTime="2026-01-21 13:26:50.069513056 +0000 UTC m=+1071.032543619" watchObservedRunningTime="2026-01-21 13:26:50.073179945 +0000 UTC m=+1071.036210488" Jan 21 13:26:51 crc kubenswrapper[4959]: I0121 13:26:51.379623 
Jan 21 13:26:52 crc kubenswrapper[4959]: I0121 13:26:52.066126 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" event={"ID":"fb0839da-0f44-43dd-a240-72c0f032f30a","Type":"ContainerStarted","Data":"d6fd845feafaf960713e407c9de13ad65f50d958b0aabbac6e8b680c027b4151"}
Jan 21 13:26:52 crc kubenswrapper[4959]: I0121 13:26:52.066425 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c"
Jan 21 13:26:52 crc kubenswrapper[4959]: I0121 13:26:52.083713 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c" podStartSLOduration=3.922929097 podStartE2EDuration="47.08369517s" podCreationTimestamp="2026-01-21 13:26:05 +0000 UTC" firstStartedPulling="2026-01-21 13:26:08.190384772 +0000 UTC m=+1029.153415315" lastFinishedPulling="2026-01-21 13:26:51.351150845 +0000 UTC m=+1072.314181388" observedRunningTime="2026-01-21 13:26:52.082020234 +0000 UTC m=+1073.045050777" watchObservedRunningTime="2026-01-21 13:26:52.08369517 +0000 UTC m=+1073.046725713"
Jan 21 13:26:52 crc kubenswrapper[4959]: I0121 13:26:52.638928 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548fndp"
Jan 21 13:26:55 crc kubenswrapper[4959]: I0121 13:26:55.675481 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-g4bs8"
Jan 21 13:26:55 crc kubenswrapper[4959]: I0121 13:26:55.741481 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-dfzqp"
Jan 21 13:26:55 crc kubenswrapper[4959]: I0121 13:26:55.977293 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-crgjd"
Jan 21 13:26:56 crc kubenswrapper[4959]: I0121 13:26:56.169827 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kslqm"
Jan 21 13:26:56 crc kubenswrapper[4959]: I0121 13:26:56.362799 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-c5fd576c9-gkv5c"
Jan 21 13:26:56 crc kubenswrapper[4959]: I0121 13:26:56.474077 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-hs86v"
Jan 21 13:26:56 crc kubenswrapper[4959]: I0121 13:26:56.509812 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5"
pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-qzds5" Jan 21 13:26:57 crc kubenswrapper[4959]: I0121 13:26:57.820996 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-96qjh" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.129830 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bzxmw"] Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.135212 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.135632 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bzxmw"] Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.138531 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.138543 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.138737 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lqh8k" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.140534 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.227829 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kb7kb"] Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.229201 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.232276 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.252258 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kb7kb"] Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.262951 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dk6z\" (UniqueName: \"kubernetes.io/projected/c7fa8803-84a4-4696-9e23-91f4998b99e7-kube-api-access-7dk6z\") pod \"dnsmasq-dns-675f4bcbfc-bzxmw\" (UID: \"c7fa8803-84a4-4696-9e23-91f4998b99e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.263014 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fa8803-84a4-4696-9e23-91f4998b99e7-config\") pod \"dnsmasq-dns-675f4bcbfc-bzxmw\" (UID: \"c7fa8803-84a4-4696-9e23-91f4998b99e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.364279 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dk6z\" (UniqueName: \"kubernetes.io/projected/c7fa8803-84a4-4696-9e23-91f4998b99e7-kube-api-access-7dk6z\") pod \"dnsmasq-dns-675f4bcbfc-bzxmw\" (UID: \"c7fa8803-84a4-4696-9e23-91f4998b99e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.364492 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7v58\" (UniqueName: 
\"kubernetes.io/projected/05ffc9b9-e97c-46f9-aa06-a933251e3c25-kube-api-access-l7v58\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.364551 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.364650 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fa8803-84a4-4696-9e23-91f4998b99e7-config\") pod \"dnsmasq-dns-675f4bcbfc-bzxmw\" (UID: \"c7fa8803-84a4-4696-9e23-91f4998b99e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.364711 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-config\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.365731 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fa8803-84a4-4696-9e23-91f4998b99e7-config\") pod \"dnsmasq-dns-675f4bcbfc-bzxmw\" (UID: \"c7fa8803-84a4-4696-9e23-91f4998b99e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.382950 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dk6z\" (UniqueName: \"kubernetes.io/projected/c7fa8803-84a4-4696-9e23-91f4998b99e7-kube-api-access-7dk6z\") pod \"dnsmasq-dns-675f4bcbfc-bzxmw\" (UID: \"c7fa8803-84a4-4696-9e23-91f4998b99e7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.455272 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.465565 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7v58\" (UniqueName: \"kubernetes.io/projected/05ffc9b9-e97c-46f9-aa06-a933251e3c25-kube-api-access-l7v58\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.465629 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.465666 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-config\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.466674 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.467112 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-config\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.484994 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7v58\" (UniqueName: \"kubernetes.io/projected/05ffc9b9-e97c-46f9-aa06-a933251e3c25-kube-api-access-l7v58\") pod \"dnsmasq-dns-78dd6ddcc-kb7kb\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") " pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.542408 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" Jan 21 13:27:13 crc kubenswrapper[4959]: I0121 13:27:13.923448 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bzxmw"] Jan 21 13:27:14 crc kubenswrapper[4959]: I0121 13:27:14.096736 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kb7kb"] Jan 21 13:27:14 crc kubenswrapper[4959]: W0121 13:27:14.098654 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05ffc9b9_e97c_46f9_aa06_a933251e3c25.slice/crio-1f08fd18d79d2572c0b762ea1557375636c086e4c65fc158aee012724497c0a5 WatchSource:0}: Error finding container 1f08fd18d79d2572c0b762ea1557375636c086e4c65fc158aee012724497c0a5: Status 404 returned error can't find the container with id 1f08fd18d79d2572c0b762ea1557375636c086e4c65fc158aee012724497c0a5 Jan 21 13:27:14 crc kubenswrapper[4959]: I0121 13:27:14.212714 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" event={"ID":"c7fa8803-84a4-4696-9e23-91f4998b99e7","Type":"ContainerStarted","Data":"c102c0f23ae0b99a1f0c41c1c0178136b99e044ffdbb49191869e3b278aa9bd2"} Jan 21 13:27:14 crc kubenswrapper[4959]: I0121 13:27:14.213784 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" event={"ID":"05ffc9b9-e97c-46f9-aa06-a933251e3c25","Type":"ContainerStarted","Data":"1f08fd18d79d2572c0b762ea1557375636c086e4c65fc158aee012724497c0a5"} Jan 21 13:27:15 crc kubenswrapper[4959]: I0121 13:27:15.919226 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bzxmw"] Jan 21 13:27:15 crc kubenswrapper[4959]: I0121 13:27:15.941019 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dxqds"] Jan 21 13:27:15 crc kubenswrapper[4959]: I0121 13:27:15.942061 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:15 crc kubenswrapper[4959]: I0121 13:27:15.959069 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dxqds"] Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.105133 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhd2b\" (UniqueName: \"kubernetes.io/projected/ab3e855e-ef21-4e3e-a048-91aaef598fde-kube-api-access-lhd2b\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.105259 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.105304 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-config\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.206245 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhd2b\" (UniqueName: \"kubernetes.io/projected/ab3e855e-ef21-4e3e-a048-91aaef598fde-kube-api-access-lhd2b\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.206346 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.206389 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-config\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.207442 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.207485 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-config\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.226688 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhd2b\" (UniqueName: 
\"kubernetes.io/projected/ab3e855e-ef21-4e3e-a048-91aaef598fde-kube-api-access-lhd2b\") pod \"dnsmasq-dns-666b6646f7-dxqds\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") " pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.246725 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kb7kb"] Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.271249 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-859fg"] Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.272441 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.273994 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.282128 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-859fg"] Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.410339 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-config\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.410388 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.410420 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtl52\" (UniqueName: \"kubernetes.io/projected/0ec5da5a-1451-45bb-867a-0edd91c811b2-kube-api-access-qtl52\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.511469 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-config\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.511729 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.511750 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtl52\" (UniqueName: \"kubernetes.io/projected/0ec5da5a-1451-45bb-867a-0edd91c811b2-kube-api-access-qtl52\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.512458 4959 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-config\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.512491 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.531746 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtl52\" (UniqueName: \"kubernetes.io/projected/0ec5da5a-1451-45bb-867a-0edd91c811b2-kube-api-access-qtl52\") pod \"dnsmasq-dns-57d769cc4f-859fg\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.667713 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:27:16 crc kubenswrapper[4959]: I0121 13:27:16.825822 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dxqds"] Jan 21 13:27:16 crc kubenswrapper[4959]: W0121 13:27:16.843234 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab3e855e_ef21_4e3e_a048_91aaef598fde.slice/crio-6f456e2bb33deee26aaddafd535fcf44644bb00a29bf1a467057b523583fd36d WatchSource:0}: Error finding container 6f456e2bb33deee26aaddafd535fcf44644bb00a29bf1a467057b523583fd36d: Status 404 returned error can't find the container with id 6f456e2bb33deee26aaddafd535fcf44644bb00a29bf1a467057b523583fd36d Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.224604 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-859fg"] Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.237778 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dxqds" event={"ID":"ab3e855e-ef21-4e3e-a048-91aaef598fde","Type":"ContainerStarted","Data":"6f456e2bb33deee26aaddafd535fcf44644bb00a29bf1a467057b523583fd36d"} Jan 21 13:27:17 crc kubenswrapper[4959]: W0121 13:27:17.253477 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ec5da5a_1451_45bb_867a_0edd91c811b2.slice/crio-bcf43bcbf3f1d6ea08591533640426bcb5ea6140510201ebd898ac8dd251b635 WatchSource:0}: Error finding container bcf43bcbf3f1d6ea08591533640426bcb5ea6140510201ebd898ac8dd251b635: Status 404 returned error can't find the container with id bcf43bcbf3f1d6ea08591533640426bcb5ea6140510201ebd898ac8dd251b635 Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.775006 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.776739 4959 util.go:30] "No sandbox for pod can be found. 
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.779522 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-45n44"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.780600 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.780640 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.780770 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.782957 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.784450 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.785649 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.785753 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.785801 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.787796 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.787901 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.788118 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.788330 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.789913 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.795144 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.795563 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.796442 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qrtdw"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.809812 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946443 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946486 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946514 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946539 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946553 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946567 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946587 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946602 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.946675 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947754 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdb7\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-kube-api-access-8zdb7\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947790 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947816 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56f613f3-9dc0-438c-8232-190c680ab312-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947835 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947850 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947869 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947900 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947922 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947951 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mtn9\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-kube-api-access-6mtn9\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.947982 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.948007 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56f613f3-9dc0-438c-8232-190c680ab312-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.948049 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:17 crc kubenswrapper[4959]: I0121 13:27:17.948069 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.145511 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.145668 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.145699 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.145829 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.145855 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.145984 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.146000 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0"
\"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.146019 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.146130 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.146149 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.146453 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.146492 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zdb7\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-kube-api-access-8zdb7\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.146730 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.146763 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56f613f3-9dc0-438c-8232-190c680ab312-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.147216 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.147240 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 
13:27:18.147280 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.147304 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.147321 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.147380 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mtn9\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-kube-api-access-6mtn9\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.147409 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.147468 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56f613f3-9dc0-438c-8232-190c680ab312-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.170475 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56f613f3-9dc0-438c-8232-190c680ab312-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.171729 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.174133 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.175282 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.175533 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.176815 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.178990 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-config-data\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.185880 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.190480 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.191219 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.191464 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.191562 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.205837 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 
13:27:18.206597 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.206722 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56f613f3-9dc0-438c-8232-190c680ab312-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.207086 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.213593 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.214444 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.222338 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.222827 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdb7\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-kube-api-access-8zdb7\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.224611 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.228470 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.228802 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " 
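[Note: each volume named in the reconciler entries above moves through VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. The Go sketch below shows how a pod could declare the same mix of volume types; the volume names mirror the log, but the ConfigMap/Secret/claim names are assumptions - the real rabbitmq-server-0 spec is operator-generated and not shown in this log.]

// volumes.go - the volume-type mix walked by the reconciler above.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func rabbitVolumes() []corev1.Volume {
	return []corev1.Volume{
		// empty-dir: node-local scratch, e.g. "rabbitmq-plugins", "rabbitmq-erlang-cookie".
		{Name: "rabbitmq-plugins", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		// configmap: e.g. "server-conf", "plugins-conf", "config-data" (source name assumed).
		{Name: "server-conf", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "rabbitmq-server-conf"}}}},
		// secret: e.g. "erlang-cookie-secret" (secret name assumed from the reflector lines).
		{Name: "erlang-cookie-secret", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "rabbitmq-erlang-cookie"}}},
		// downward API: "pod-info" exposes pod metadata as files inside the container.
		{Name: "pod-info", VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "name",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			}}},
		// PVC-backed storage: "local-storage06-crc" is bound via a claim (claim name assumed).
		{Name: "persistence", VolumeSource: corev1.VolumeSource{
			PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
				ClaimName: "persistence-rabbitmq-server-0"}}},
	}
}

func main() { _ = rabbitVolumes() }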
pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.248027 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mtn9\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-kube-api-access-6mtn9\") pod \"rabbitmq-cell1-server-0\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.265520 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" event={"ID":"0ec5da5a-1451-45bb-867a-0edd91c811b2","Type":"ContainerStarted","Data":"bcf43bcbf3f1d6ea08591533640426bcb5ea6140510201ebd898ac8dd251b635"} Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.413176 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.423558 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.659243 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.663924 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.670513 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fb2dn" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.670824 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.683459 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.684191 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.711273 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.780625 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.896025 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d467040c-ef01-4a64-9d0e-bce50426c248-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.896399 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-kolla-config\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.896449 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rkv\" (UniqueName: \"kubernetes.io/projected/d467040c-ef01-4a64-9d0e-bce50426c248-kube-api-access-64rkv\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " 
pod="openstack/openstack-galera-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.896494 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.896519 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d467040c-ef01-4a64-9d0e-bce50426c248-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.896541 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.896591 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d467040c-ef01-4a64-9d0e-bce50426c248-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:18 crc kubenswrapper[4959]: I0121 13:27:18.896817 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-config-data-default\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.041598 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d467040c-ef01-4a64-9d0e-bce50426c248-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.041647 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-kolla-config\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.041680 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64rkv\" (UniqueName: \"kubernetes.io/projected/d467040c-ef01-4a64-9d0e-bce50426c248-kube-api-access-64rkv\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.041708 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.041731 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d467040c-ef01-4a64-9d0e-bce50426c248-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.041752 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.041795 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d467040c-ef01-4a64-9d0e-bce50426c248-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.042492 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d467040c-ef01-4a64-9d0e-bce50426c248-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.042646 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.043457 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-config-data-default\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.044330 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-kolla-config\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.044837 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-config-data-default\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.045215 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d467040c-ef01-4a64-9d0e-bce50426c248-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.058511 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d467040c-ef01-4a64-9d0e-bce50426c248-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.062612 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d467040c-ef01-4a64-9d0e-bce50426c248-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.076260 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rkv\" (UniqueName: \"kubernetes.io/projected/d467040c-ef01-4a64-9d0e-bce50426c248-kube-api-access-64rkv\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.216324 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d467040c-ef01-4a64-9d0e-bce50426c248\") " pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.302277 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.435401 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:27:19 crc kubenswrapper[4959]: W0121 13:27:19.495156 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3273a9_7ce3_48ea_9546_ecb560a2d6b2.slice/crio-556474c70bc1657b5687c1bd6a12c799aa82528a920da5559f8fe194550ee83d WatchSource:0}: Error finding container 556474c70bc1657b5687c1bd6a12c799aa82528a920da5559f8fe194550ee83d: Status 404 returned error can't find the container with id 556474c70bc1657b5687c1bd6a12c799aa82528a920da5559f8fe194550ee83d Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.727818 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:27:19 crc kubenswrapper[4959]: W0121 13:27:19.845314 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56f613f3_9dc0_438c_8232_190c680ab312.slice/crio-f47892ae61264a84b4c54e8ad502945afacf80c6f9d88d6b7a7e6a3e8bbc90f1 WatchSource:0}: Error finding container f47892ae61264a84b4c54e8ad502945afacf80c6f9d88d6b7a7e6a3e8bbc90f1: Status 404 returned error can't find the container with id f47892ae61264a84b4c54e8ad502945afacf80c6f9d88d6b7a7e6a3e8bbc90f1 Jan 21 13:27:19 crc kubenswrapper[4959]: I0121 13:27:19.985486 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 13:27:20 crc kubenswrapper[4959]: W0121 13:27:20.020746 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd467040c_ef01_4a64_9d0e_bce50426c248.slice/crio-55f5711198f1780310b0d47e06743ef594c822638c2299235f9b085cdfafe196 WatchSource:0}: Error finding container 55f5711198f1780310b0d47e06743ef594c822638c2299235f9b085cdfafe196: Status 404 returned error can't find the container with id 55f5711198f1780310b0d47e06743ef594c822638c2299235f9b085cdfafe196 Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.368647 4959 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/memcached-0"] Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.370405 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.374135 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.374294 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8dwm2" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.374429 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.380329 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.425294 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56f613f3-9dc0-438c-8232-190c680ab312","Type":"ContainerStarted","Data":"f47892ae61264a84b4c54e8ad502945afacf80c6f9d88d6b7a7e6a3e8bbc90f1"} Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.433699 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d467040c-ef01-4a64-9d0e-bce50426c248","Type":"ContainerStarted","Data":"55f5711198f1780310b0d47e06743ef594c822638c2299235f9b085cdfafe196"} Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.441330 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2","Type":"ContainerStarted","Data":"556474c70bc1657b5687c1bd6a12c799aa82528a920da5559f8fe194550ee83d"} Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.478215 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.479750 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.488584 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dfctw" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.488768 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.488869 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.489464 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.497608 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.539115 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.539183 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46tgl\" (UniqueName: \"kubernetes.io/projected/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-kube-api-access-46tgl\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.539294 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.539338 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-config-data\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.539415 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-kolla-config\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642287 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642348 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " 
pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642378 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46tgl\" (UniqueName: \"kubernetes.io/projected/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-kube-api-access-46tgl\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642399 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642425 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642456 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642486 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-config-data\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642509 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642528 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642558 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4q48\" (UniqueName: \"kubernetes.io/projected/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-kube-api-access-b4q48\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642579 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc 
kubenswrapper[4959]: I0121 13:27:20.642601 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-kolla-config\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.642616 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.644396 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-config-data\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.644962 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-kolla-config\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.661269 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.680424 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.683194 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46tgl\" (UniqueName: \"kubernetes.io/projected/bedefb76-bb9d-46cf-87e9-f8001ff9ce64-kube-api-access-46tgl\") pod \"memcached-0\" (UID: \"bedefb76-bb9d-46cf-87e9-f8001ff9ce64\") " pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.708975 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.755554 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.755907 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.755963 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.755996 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.756025 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.756057 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4q48\" (UniqueName: \"kubernetes.io/projected/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-kube-api-access-b4q48\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.756087 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.756154 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.760961 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.762336 
4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.792737 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.794413 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.794776 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.809119 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.823441 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.824873 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:20 crc kubenswrapper[4959]: I0121 13:27:20.827829 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4q48\" (UniqueName: \"kubernetes.io/projected/60b98eb7-0886-4619-afde-c4fb7c5ad7c4-kube-api-access-b4q48\") pod \"openstack-cell1-galera-0\" (UID: \"60b98eb7-0886-4619-afde-c4fb7c5ad7c4\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:21 crc kubenswrapper[4959]: I0121 13:27:21.113321 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 13:27:21 crc kubenswrapper[4959]: I0121 13:27:21.410896 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:27:21 crc kubenswrapper[4959]: I0121 13:27:21.410951 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:27:21 crc kubenswrapper[4959]: I0121 13:27:21.636550 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 13:27:21 crc kubenswrapper[4959]: I0121 13:27:21.666621 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 13:27:21 crc kubenswrapper[4959]: W0121 13:27:21.682795 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60b98eb7_0886_4619_afde_c4fb7c5ad7c4.slice/crio-0e68187aa884b6233d6bed723e8f4b82bb534714e077454eb4afe4eb6ce60851 WatchSource:0}: Error finding container 0e68187aa884b6233d6bed723e8f4b82bb534714e077454eb4afe4eb6ce60851: Status 404 returned error can't find the container with id 0e68187aa884b6233d6bed723e8f4b82bb534714e077454eb4afe4eb6ce60851 Jan 21 13:27:21 crc kubenswrapper[4959]: W0121 13:27:21.702355 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbedefb76_bb9d_46cf_87e9_f8001ff9ce64.slice/crio-7c9aabfb3eb84957b5bd9a27cce31e412a385cb30d6c8fec6f927ce0b36653ee WatchSource:0}: Error finding container 7c9aabfb3eb84957b5bd9a27cce31e412a385cb30d6c8fec6f927ce0b36653ee: Status 404 returned error can't find the container with id 7c9aabfb3eb84957b5bd9a27cce31e412a385cb30d6c8fec6f927ce0b36653ee Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.148860 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.149758 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.152112 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6czf7" Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.157504 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.263869 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v48zz\" (UniqueName: \"kubernetes.io/projected/c4c3e540-1be8-41f8-92e8-1371c406c6f2-kube-api-access-v48zz\") pod \"kube-state-metrics-0\" (UID: \"c4c3e540-1be8-41f8-92e8-1371c406c6f2\") " pod="openstack/kube-state-metrics-0" Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.366996 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v48zz\" (UniqueName: \"kubernetes.io/projected/c4c3e540-1be8-41f8-92e8-1371c406c6f2-kube-api-access-v48zz\") pod \"kube-state-metrics-0\" (UID: \"c4c3e540-1be8-41f8-92e8-1371c406c6f2\") " pod="openstack/kube-state-metrics-0" Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.392650 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v48zz\" (UniqueName: \"kubernetes.io/projected/c4c3e540-1be8-41f8-92e8-1371c406c6f2-kube-api-access-v48zz\") pod \"kube-state-metrics-0\" (UID: \"c4c3e540-1be8-41f8-92e8-1371c406c6f2\") " pod="openstack/kube-state-metrics-0" Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.527474 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.541477 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bedefb76-bb9d-46cf-87e9-f8001ff9ce64","Type":"ContainerStarted","Data":"7c9aabfb3eb84957b5bd9a27cce31e412a385cb30d6c8fec6f927ce0b36653ee"} Jan 21 13:27:22 crc kubenswrapper[4959]: I0121 13:27:22.558613 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60b98eb7-0886-4619-afde-c4fb7c5ad7c4","Type":"ContainerStarted","Data":"0e68187aa884b6233d6bed723e8f4b82bb534714e077454eb4afe4eb6ce60851"} Jan 21 13:27:23 crc kubenswrapper[4959]: I0121 13:27:23.592918 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:27:24 crc kubenswrapper[4959]: I0121 13:27:24.649229 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4c3e540-1be8-41f8-92e8-1371c406c6f2","Type":"ContainerStarted","Data":"225cb1bc8068e07aaf11369dfb849935a5cff9d68f639de3cc28bd8f018f6908"} Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.198503 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.199734 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.206013 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.206526 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.206570 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.206697 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.206953 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-xgbf4" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.248990 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.330822 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b481c4d6-1f2e-40e5-a27b-3f840055418a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.330904 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b481c4d6-1f2e-40e5-a27b-3f840055418a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.331003 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.331123 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.331151 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.331167 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b481c4d6-1f2e-40e5-a27b-3f840055418a-config\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.331212 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.331234 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2wxq\" (UniqueName: \"kubernetes.io/projected/b481c4d6-1f2e-40e5-a27b-3f840055418a-kube-api-access-f2wxq\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.432653 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.432854 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.432875 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.432893 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b481c4d6-1f2e-40e5-a27b-3f840055418a-config\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.432922 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.432952 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2wxq\" (UniqueName: \"kubernetes.io/projected/b481c4d6-1f2e-40e5-a27b-3f840055418a-kube-api-access-f2wxq\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.432974 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b481c4d6-1f2e-40e5-a27b-3f840055418a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.433000 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b481c4d6-1f2e-40e5-a27b-3f840055418a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 
13:27:27.434071 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.440894 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b481c4d6-1f2e-40e5-a27b-3f840055418a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.442460 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b481c4d6-1f2e-40e5-a27b-3f840055418a-config\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.444724 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.444782 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.446683 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b481c4d6-1f2e-40e5-a27b-3f840055418a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.447199 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b481c4d6-1f2e-40e5-a27b-3f840055418a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.474556 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2wxq\" (UniqueName: \"kubernetes.io/projected/b481c4d6-1f2e-40e5-a27b-3f840055418a-kube-api-access-f2wxq\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.511349 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b481c4d6-1f2e-40e5-a27b-3f840055418a\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.538045 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nqz7q"] Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.538963 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.544178 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.544373 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-n7stj" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.544497 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.549236 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.555475 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqz7q"] Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.570748 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-945nd"] Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.572395 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.581110 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-945nd"] Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.640976 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfzjg\" (UniqueName: \"kubernetes.io/projected/77986c63-ba96-4c22-9b51-925c5b43b092-kube-api-access-pfzjg\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.641086 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77986c63-ba96-4c22-9b51-925c5b43b092-scripts\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.641175 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-run\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.641227 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77986c63-ba96-4c22-9b51-925c5b43b092-combined-ca-bundle\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.641257 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/77986c63-ba96-4c22-9b51-925c5b43b092-ovn-controller-tls-certs\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.641336 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-log-ovn\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.641405 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-run-ovn\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743304 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-log-ovn\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743352 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-run-ovn\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743381 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfzjg\" (UniqueName: \"kubernetes.io/projected/77986c63-ba96-4c22-9b51-925c5b43b092-kube-api-access-pfzjg\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743413 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-log\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743430 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8zwz\" (UniqueName: \"kubernetes.io/projected/527befc1-b6a0-41ae-9b03-9057b0dbfe19-kube-api-access-h8zwz\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743460 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/527befc1-b6a0-41ae-9b03-9057b0dbfe19-scripts\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743484 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77986c63-ba96-4c22-9b51-925c5b43b092-scripts\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743504 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-lib\") pod 
\"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743526 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-run\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743556 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-run\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743576 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-etc-ovs\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743592 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77986c63-ba96-4c22-9b51-925c5b43b092-combined-ca-bundle\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743610 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/77986c63-ba96-4c22-9b51-925c5b43b092-ovn-controller-tls-certs\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.743933 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-log-ovn\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.744173 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-run-ovn\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.744301 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77986c63-ba96-4c22-9b51-925c5b43b092-var-run\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.753268 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77986c63-ba96-4c22-9b51-925c5b43b092-scripts\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.759013 4959 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/77986c63-ba96-4c22-9b51-925c5b43b092-ovn-controller-tls-certs\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.759295 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77986c63-ba96-4c22-9b51-925c5b43b092-combined-ca-bundle\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.768772 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfzjg\" (UniqueName: \"kubernetes.io/projected/77986c63-ba96-4c22-9b51-925c5b43b092-kube-api-access-pfzjg\") pod \"ovn-controller-nqz7q\" (UID: \"77986c63-ba96-4c22-9b51-925c5b43b092\") " pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.864741 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-run\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.864852 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-etc-ovs\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.864985 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-run\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.865008 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-log\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.865029 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8zwz\" (UniqueName: \"kubernetes.io/projected/527befc1-b6a0-41ae-9b03-9057b0dbfe19-kube-api-access-h8zwz\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.865048 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/527befc1-b6a0-41ae-9b03-9057b0dbfe19-scripts\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.865114 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-lib\") pod \"ovn-controller-ovs-945nd\" (UID: 
\"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.865219 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-etc-ovs\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.865385 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-lib\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.865518 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/527befc1-b6a0-41ae-9b03-9057b0dbfe19-var-log\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.868153 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/527befc1-b6a0-41ae-9b03-9057b0dbfe19-scripts\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.874713 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqz7q" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.893433 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8zwz\" (UniqueName: \"kubernetes.io/projected/527befc1-b6a0-41ae-9b03-9057b0dbfe19-kube-api-access-h8zwz\") pod \"ovn-controller-ovs-945nd\" (UID: \"527befc1-b6a0-41ae-9b03-9057b0dbfe19\") " pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:27 crc kubenswrapper[4959]: I0121 13:27:27.911271 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.337918 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.345176 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.350565 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wmt4h" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.350609 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.350569 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.350852 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.375345 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.457943 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m9f2\" (UniqueName: \"kubernetes.io/projected/4990579d-d1cf-412f-8246-72396bc8fb1a-kube-api-access-6m9f2\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.458028 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.458077 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.458129 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4990579d-d1cf-412f-8246-72396bc8fb1a-config\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.458163 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.458177 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4990579d-d1cf-412f-8246-72396bc8fb1a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.458575 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.458631 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4990579d-d1cf-412f-8246-72396bc8fb1a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.561842 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.561901 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4990579d-d1cf-412f-8246-72396bc8fb1a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.561991 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m9f2\" (UniqueName: \"kubernetes.io/projected/4990579d-d1cf-412f-8246-72396bc8fb1a-kube-api-access-6m9f2\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.562042 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.562086 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.562140 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4990579d-d1cf-412f-8246-72396bc8fb1a-config\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.562168 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.562187 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4990579d-d1cf-412f-8246-72396bc8fb1a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.562632 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/4990579d-d1cf-412f-8246-72396bc8fb1a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.562995 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.565811 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4990579d-d1cf-412f-8246-72396bc8fb1a-config\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.573240 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.573899 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4990579d-d1cf-412f-8246-72396bc8fb1a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.578697 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.591957 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m9f2\" (UniqueName: \"kubernetes.io/projected/4990579d-d1cf-412f-8246-72396bc8fb1a-kube-api-access-6m9f2\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.601201 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4990579d-d1cf-412f-8246-72396bc8fb1a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.610216 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4990579d-d1cf-412f-8246-72396bc8fb1a\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:29 crc kubenswrapper[4959]: I0121 13:27:29.670052 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 13:27:51 crc kubenswrapper[4959]: I0121 13:27:51.379570 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:27:51 crc kubenswrapper[4959]: I0121 13:27:51.380164 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:27:51 crc kubenswrapper[4959]: I0121 13:27:51.380217 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:27:51 crc kubenswrapper[4959]: I0121 13:27:51.380843 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d241f95bdad8e099eb04c705c02b5632d266875f065692682f2eadc1b6776be6"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:27:51 crc kubenswrapper[4959]: I0121 13:27:51.380898 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://d241f95bdad8e099eb04c705c02b5632d266875f065692682f2eadc1b6776be6" gracePeriod=600 Jan 21 13:27:59 crc kubenswrapper[4959]: I0121 13:27:59.028893 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="d241f95bdad8e099eb04c705c02b5632d266875f065692682f2eadc1b6776be6" exitCode=0 Jan 21 13:27:59 crc kubenswrapper[4959]: I0121 13:27:59.028987 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"d241f95bdad8e099eb04c705c02b5632d266875f065692682f2eadc1b6776be6"} Jan 21 13:27:59 crc kubenswrapper[4959]: I0121 13:27:59.029963 4959 scope.go:117] "RemoveContainer" containerID="843c9a535cea21503639885bda8c5e42d1482db615844b1ac00c900cdaba0bca" Jan 21 13:27:59 crc kubenswrapper[4959]: E0121 13:27:59.400798 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 13:27:59 crc kubenswrapper[4959]: E0121 13:27:59.401699 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64rkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(d467040c-ef01-4a64-9d0e-bce50426c248): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:27:59 crc kubenswrapper[4959]: E0121 13:27:59.402953 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="d467040c-ef01-4a64-9d0e-bce50426c248" Jan 21 13:27:59 crc kubenswrapper[4959]: E0121 13:27:59.427166 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 13:27:59 crc kubenswrapper[4959]: E0121 13:27:59.427334 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4q48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(60b98eb7-0886-4619-afde-c4fb7c5ad7c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:27:59 crc kubenswrapper[4959]: E0121 13:27:59.428514 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="60b98eb7-0886-4619-afde-c4fb7c5ad7c4"
Jan 21 13:28:00 crc kubenswrapper[4959]: E0121 13:28:00.040246 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="d467040c-ef01-4a64-9d0e-bce50426c248"
Jan 21 13:28:00 crc kubenswrapper[4959]: E0121 13:28:00.041490 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="60b98eb7-0886-4619-afde-c4fb7c5ad7c4"
Jan 21 13:28:01 crc kubenswrapper[4959]: E0121 13:28:01.066220 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 21 13:28:01 crc kubenswrapper[4959]: E0121 13:28:01.067050 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mtn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(56f613f3-9dc0-438c-8232-190c680ab312): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:28:01 crc kubenswrapper[4959]: E0121 13:28:01.068896 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="56f613f3-9dc0-438c-8232-190c680ab312"
Jan 21 13:28:02 crc kubenswrapper[4959]: E0121 13:28:02.054975 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="56f613f3-9dc0-438c-8232-190c680ab312"
Jan 21 13:28:02 crc kubenswrapper[4959]: E0121 13:28:02.062325 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified"
Jan 21 13:28:02 crc kubenswrapper[4959]: E0121 13:28:02.062530 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:ndh586hdh5bfhb8h66dh595h86h6h5bfh598h684h676h6bh579h5d6h68bh584hf7h78h86h576h57ch6chd9h677hf8h686h5fh5b9h5ch678q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46tgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(bedefb76-bb9d-46cf-87e9-f8001ff9ce64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:28:02 crc kubenswrapper[4959]: E0121 13:28:02.063776 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="bedefb76-bb9d-46cf-87e9-f8001ff9ce64"
Jan 21 13:28:02 crc kubenswrapper[4959]: E0121 13:28:02.094710 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 21 13:28:02 crc kubenswrapper[4959]: E0121 13:28:02.094912 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zdb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(3b3273a9-7ce3-48ea-9546-ecb560a2d6b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:28:02 crc kubenswrapper[4959]: E0121 13:28:02.096181 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"
Jan 21 13:28:02 crc kubenswrapper[4959]: I0121 13:28:02.371322 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqz7q"]
Jan 21 13:28:03 crc kubenswrapper[4959]: E0121 13:28:03.059915 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="bedefb76-bb9d-46cf-87e9-f8001ff9ce64"
Jan 21 13:28:03 crc kubenswrapper[4959]: E0121 13:28:03.061224 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"
Jan 21 13:28:06 crc kubenswrapper[4959]: I0121 13:28:06.638729 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-2gp7k" podUID="a8d96f57-3e2b-4959-9205-7ccb1f90abf2" containerName="registry-server" probeResult="failure" output=<
Jan 21 13:28:06 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s
Jan 21 13:28:06 crc kubenswrapper[4959]: >
Jan 21 13:28:06 crc kubenswrapper[4959]: I0121 13:28:06.638802 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" podUID="8075108b-d9e1-40d4-9e2e-4faa59061778" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.63:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:28:06 crc kubenswrapper[4959]: I0121 13:28:06.638861 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-pp9dq" podUID="8075108b-d9e1-40d4-9e2e-4faa59061778" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.63:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:28:06 crc kubenswrapper[4959]: I0121 13:28:06.652606 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-2gp7k" podUID="a8d96f57-3e2b-4959-9205-7ccb1f90abf2" containerName="registry-server" probeResult="failure" output=<
Jan 21 13:28:06 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s
Jan 21 13:28:06 crc kubenswrapper[4959]: >
Jan 21 13:28:08 crc kubenswrapper[4959]: I0121 13:28:08.673757 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqz7q" event={"ID":"77986c63-ba96-4c22-9b51-925c5b43b092","Type":"ContainerStarted","Data":"2f0f4fa4599dd1c7cb838f6c83e6a0053a436667508c4d136e6ea3501f83ba96"}
Jan 21 13:28:09 crc kubenswrapper[4959]: I0121 13:28:09.037921 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-945nd"]
Jan 21 13:28:09 crc kubenswrapper[4959]: I0121 13:28:09.218471 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 21 13:28:09 crc kubenswrapper[4959]: I0121 13:28:09.536666 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 21 13:28:10 crc kubenswrapper[4959]: W0121 13:28:10.711050 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4990579d_d1cf_412f_8246_72396bc8fb1a.slice/crio-2cc885ba8f9bcf94706f8daa48844aa0195ba7656a28000d65873c4db9a8692a WatchSource:0}: Error finding container 2cc885ba8f9bcf94706f8daa48844aa0195ba7656a28000d65873c4db9a8692a: Status 404 returned error can't find the container with id 2cc885ba8f9bcf94706f8daa48844aa0195ba7656a28000d65873c4db9a8692a
Jan 21 13:28:10 crc kubenswrapper[4959]: W0121 13:28:10.714037 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod527befc1_b6a0_41ae_9b03_9057b0dbfe19.slice/crio-8c36a457379e079f2528bbd9f4e5762746c8aa1e6d9b05a83e975df7efd1c722 WatchSource:0}: Error finding container 8c36a457379e079f2528bbd9f4e5762746c8aa1e6d9b05a83e975df7efd1c722: Status 404 returned error can't find the container with id 8c36a457379e079f2528bbd9f4e5762746c8aa1e6d9b05a83e975df7efd1c722
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.043530 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.043732 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dk6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bzxmw_openstack(c7fa8803-84a4-4696-9e23-91f4998b99e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.045032 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" podUID="c7fa8803-84a4-4696-9e23-91f4998b99e7"
Jan 21 13:28:11 crc kubenswrapper[4959]: W0121 13:28:11.325197 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb481c4d6_1f2e_40e5_a27b_3f840055418a.slice/crio-44da16b62261ccac9355c215b419e40151ca71d0f5ad4880a78d14c8f840cce8 WatchSource:0}: Error finding container 44da16b62261ccac9355c215b419e40151ca71d0f5ad4880a78d14c8f840cce8: Status 404 returned error can't find the container with id 44da16b62261ccac9355c215b419e40151ca71d0f5ad4880a78d14c8f840cce8
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.346141 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.346595 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7v58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-kb7kb_openstack(05ffc9b9-e97c-46f9-aa06-a933251e3c25): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.348587 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" podUID="05ffc9b9-e97c-46f9-aa06-a933251e3c25"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.428186 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.428391 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhd2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-dxqds_openstack(ab3e855e-ef21-4e3e-a048-91aaef598fde): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.429804 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-dxqds" podUID="ab3e855e-ef21-4e3e-a048-91aaef598fde"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.544307 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.544516 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtl52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-859fg_openstack(0ec5da5a-1451-45bb-867a-0edd91c811b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.545785 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2"
Jan 21 13:28:11 crc kubenswrapper[4959]: I0121 13:28:11.697080 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4990579d-d1cf-412f-8246-72396bc8fb1a","Type":"ContainerStarted","Data":"2cc885ba8f9bcf94706f8daa48844aa0195ba7656a28000d65873c4db9a8692a"}
Jan 21 13:28:11 crc kubenswrapper[4959]: I0121 13:28:11.699778 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b481c4d6-1f2e-40e5-a27b-3f840055418a","Type":"ContainerStarted","Data":"44da16b62261ccac9355c215b419e40151ca71d0f5ad4880a78d14c8f840cce8"}
Jan 21 13:28:11 crc kubenswrapper[4959]: I0121 13:28:11.703176 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-945nd" event={"ID":"527befc1-b6a0-41ae-9b03-9057b0dbfe19","Type":"ContainerStarted","Data":"8c36a457379e079f2528bbd9f4e5762746c8aa1e6d9b05a83e975df7efd1c722"}
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.705658 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2"
Jan 21 13:28:11 crc kubenswrapper[4959]: E0121 13:28:11.705770 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-dxqds" podUID="ab3e855e-ef21-4e3e-a048-91aaef598fde"
Jan 21 13:28:12 crc kubenswrapper[4959]: E0121 13:28:12.653698 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Jan 21 13:28:12 crc kubenswrapper[4959]: E0121 13:28:12.654063 4959 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Jan 21 13:28:12 crc kubenswrapper[4959]: E0121 13:28:12.654290 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v48zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(c4c3e540-1be8-41f8-92e8-1371c406c6f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 13:28:12 crc kubenswrapper[4959]: E0121 13:28:12.655399 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="c4c3e540-1be8-41f8-92e8-1371c406c6f2"
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.661198 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw"
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.673731 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb"
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.711486 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb" event={"ID":"05ffc9b9-e97c-46f9-aa06-a933251e3c25","Type":"ContainerDied","Data":"1f08fd18d79d2572c0b762ea1557375636c086e4c65fc158aee012724497c0a5"}
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.711906 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-kb7kb"
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.714426 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw"
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.714566 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bzxmw" event={"ID":"c7fa8803-84a4-4696-9e23-91f4998b99e7","Type":"ContainerDied","Data":"c102c0f23ae0b99a1f0c41c1c0178136b99e044ffdbb49191869e3b278aa9bd2"}
Jan 21 13:28:12 crc kubenswrapper[4959]: E0121 13:28:12.716251 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="c4c3e540-1be8-41f8-92e8-1371c406c6f2"
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.741554 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-dns-svc\") pod \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") "
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.741600 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7v58\" (UniqueName: \"kubernetes.io/projected/05ffc9b9-e97c-46f9-aa06-a933251e3c25-kube-api-access-l7v58\") pod \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") "
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.741635 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dk6z\" (UniqueName: \"kubernetes.io/projected/c7fa8803-84a4-4696-9e23-91f4998b99e7-kube-api-access-7dk6z\") pod \"c7fa8803-84a4-4696-9e23-91f4998b99e7\" (UID: \"c7fa8803-84a4-4696-9e23-91f4998b99e7\") "
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.741661 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fa8803-84a4-4696-9e23-91f4998b99e7-config\") pod \"c7fa8803-84a4-4696-9e23-91f4998b99e7\" (UID: \"c7fa8803-84a4-4696-9e23-91f4998b99e7\") "
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.741720 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-config\") pod \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\" (UID: \"05ffc9b9-e97c-46f9-aa06-a933251e3c25\") "
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.742201 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05ffc9b9-e97c-46f9-aa06-a933251e3c25" (UID: "05ffc9b9-e97c-46f9-aa06-a933251e3c25"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.742539 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-config" (OuterVolumeSpecName: "config") pod "05ffc9b9-e97c-46f9-aa06-a933251e3c25" (UID: "05ffc9b9-e97c-46f9-aa06-a933251e3c25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.743040 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fa8803-84a4-4696-9e23-91f4998b99e7-config" (OuterVolumeSpecName: "config") pod "c7fa8803-84a4-4696-9e23-91f4998b99e7" (UID: "c7fa8803-84a4-4696-9e23-91f4998b99e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.748385 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ffc9b9-e97c-46f9-aa06-a933251e3c25-kube-api-access-l7v58" (OuterVolumeSpecName: "kube-api-access-l7v58") pod "05ffc9b9-e97c-46f9-aa06-a933251e3c25" (UID: "05ffc9b9-e97c-46f9-aa06-a933251e3c25"). InnerVolumeSpecName "kube-api-access-l7v58". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.749808 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fa8803-84a4-4696-9e23-91f4998b99e7-kube-api-access-7dk6z" (OuterVolumeSpecName: "kube-api-access-7dk6z") pod "c7fa8803-84a4-4696-9e23-91f4998b99e7" (UID: "c7fa8803-84a4-4696-9e23-91f4998b99e7"). InnerVolumeSpecName "kube-api-access-7dk6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.844051 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dk6z\" (UniqueName: \"kubernetes.io/projected/c7fa8803-84a4-4696-9e23-91f4998b99e7-kube-api-access-7dk6z\") on node \"crc\" DevicePath \"\""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.844114 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fa8803-84a4-4696-9e23-91f4998b99e7-config\") on node \"crc\" DevicePath \"\""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.844130 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-config\") on node \"crc\" DevicePath \"\""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.844140 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05ffc9b9-e97c-46f9-aa06-a933251e3c25-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 13:28:12 crc kubenswrapper[4959]: I0121 13:28:12.844153 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7v58\" (UniqueName: \"kubernetes.io/projected/05ffc9b9-e97c-46f9-aa06-a933251e3c25-kube-api-access-l7v58\") on node \"crc\" DevicePath \"\""
Jan 21 13:28:13 crc kubenswrapper[4959]: I0121 13:28:13.080065 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kb7kb"]
Jan 21 13:28:13 crc kubenswrapper[4959]: I0121 13:28:13.089773 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-kb7kb"]
Jan 21 13:28:13 crc kubenswrapper[4959]: I0121 13:28:13.119377 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bzxmw"]
Jan 21 13:28:13 crc kubenswrapper[4959]: I0121 13:28:13.267061 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bzxmw"]
Jan 21 13:28:13 crc kubenswrapper[4959]: I0121 13:28:13.295835 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ffc9b9-e97c-46f9-aa06-a933251e3c25" path="/var/lib/kubelet/pods/05ffc9b9-e97c-46f9-aa06-a933251e3c25/volumes"
Jan 21 13:28:13 crc kubenswrapper[4959]: I0121 13:28:13.296536 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7fa8803-84a4-4696-9e23-91f4998b99e7" path="/var/lib/kubelet/pods/c7fa8803-84a4-4696-9e23-91f4998b99e7/volumes"
Jan 21 13:28:13 crc kubenswrapper[4959]: I0121 13:28:13.723511 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"8a84446f54fbcdd9e945dd5ad114c0f8a1adc39825215bb3644cea7a3988b06e"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.754422 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d467040c-ef01-4a64-9d0e-bce50426c248","Type":"ContainerStarted","Data":"883746fa7e3b3e25759201f0a1f599262fab1533c858b4cc2a3dc80603668017"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.757856 4959 generic.go:334] "Generic (PLEG): container finished" podID="527befc1-b6a0-41ae-9b03-9057b0dbfe19" containerID="f3d51553182076ae02b2ad335b856c71623adc513eff8092dc75bbeffe3e7a13" exitCode=0
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.758314 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-945nd" event={"ID":"527befc1-b6a0-41ae-9b03-9057b0dbfe19","Type":"ContainerDied","Data":"f3d51553182076ae02b2ad335b856c71623adc513eff8092dc75bbeffe3e7a13"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.761200 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bedefb76-bb9d-46cf-87e9-f8001ff9ce64","Type":"ContainerStarted","Data":"31a4d69d6b8e4a61232099c67a5f91ff413525224a0379c3169ff29f90221f95"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.761450 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.763015 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqz7q" event={"ID":"77986c63-ba96-4c22-9b51-925c5b43b092","Type":"ContainerStarted","Data":"5254d90a79f92baeb1e3d8d628df7e7f9a3247b8c27585f5536c94f4145e8f9d"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.765981 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4990579d-d1cf-412f-8246-72396bc8fb1a","Type":"ContainerStarted","Data":"92b4710af3b3795f0ec10abcf5a515051f481f79a1529391bdfffc21850ce4e7"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.768653 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b481c4d6-1f2e-40e5-a27b-3f840055418a","Type":"ContainerStarted","Data":"7924be361445bb82b9a1d99c4a3ddeca32927ce08fe940f99705f7f4f217ebd4"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.770249 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60b98eb7-0886-4619-afde-c4fb7c5ad7c4","Type":"ContainerStarted","Data":"e4c83ac8e17c77b378bf79fb35048c96e4615f23219753eccf471cd53e29cec0"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.775256 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56f613f3-9dc0-438c-8232-190c680ab312","Type":"ContainerStarted","Data":"648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648"}
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.867698 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.064405668 podStartE2EDuration="57.867685247s" podCreationTimestamp="2026-01-21 13:27:20 +0000 UTC" firstStartedPulling="2026-01-21 13:27:21.706366229 +0000 UTC m=+1102.669396772" lastFinishedPulling="2026-01-21 13:28:16.509645808 +0000 UTC m=+1157.472676351" observedRunningTime="2026-01-21 13:28:17.867283726 +0000 UTC m=+1158.830314269" watchObservedRunningTime="2026-01-21 13:28:17.867685247 +0000 UTC m=+1158.830715790"
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.875384 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nqz7q"
Jan 21 13:28:17 crc kubenswrapper[4959]: I0121 13:28:17.892223 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nqz7q" podStartSLOduration=42.779019367 podStartE2EDuration="50.892196151s" podCreationTimestamp="2026-01-21 13:27:27 +0000 UTC" firstStartedPulling="2026-01-21 13:28:08.404403438 +0000 UTC m=+1149.367433981" lastFinishedPulling="2026-01-21 13:28:16.517580222 +0000 UTC m=+1157.480610765" observedRunningTime="2026-01-21 13:28:17.885293644 +0000 UTC m=+1158.848324177" watchObservedRunningTime="2026-01-21 13:28:17.892196151 +0000 UTC m=+1158.855226694"
Jan 21 13:28:18 crc kubenswrapper[4959]: I0121 13:28:18.784084 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-945nd" event={"ID":"527befc1-b6a0-41ae-9b03-9057b0dbfe19","Type":"ContainerStarted","Data":"e9965501dbc72ec7a3a8f525c9c5c3249add6a6b3ffe6476333de76d087a218f"}
Jan 21 13:28:19 crc kubenswrapper[4959]: I0121 13:28:19.793371 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2","Type":"ContainerStarted","Data":"489878be51100bbc8edd0fe92d1f85d34e280d023ad9591d04ed79f7501bbf46"}
Jan 21 13:28:19 crc kubenswrapper[4959]: I0121 13:28:19.797375 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-945nd" event={"ID":"527befc1-b6a0-41ae-9b03-9057b0dbfe19","Type":"ContainerStarted","Data":"309fab2120319d19b54908ba795ff44c3cbcec6ca1a76427cdcdcd205505704b"}
Jan 21 13:28:19 crc kubenswrapper[4959]: I0121 13:28:19.797729 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-945nd"
Jan 21 13:28:20 crc kubenswrapper[4959]: I0121 13:28:20.135265 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-945nd" podStartSLOduration=47.421017568 podStartE2EDuration="53.135247964s" podCreationTimestamp="2026-01-21 13:27:27 +0000 UTC" firstStartedPulling="2026-01-21 13:28:10.718996641 +0000 UTC m=+1151.682027184" lastFinishedPulling="2026-01-21 13:28:16.433227037 +0000 UTC m=+1157.396257580" observedRunningTime="2026-01-21 13:28:20.134822023 +0000 UTC m=+1161.097852566" watchObservedRunningTime="2026-01-21 13:28:20.135247964 +0000 UTC m=+1161.098278497"
Jan 21 13:28:20 crc kubenswrapper[4959]: I0121 13:28:20.805254 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-945nd"
Jan 21 13:28:21 crc kubenswrapper[4959]: I0121 13:28:21.812783 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4990579d-d1cf-412f-8246-72396bc8fb1a","Type":"ContainerStarted","Data":"ce1cca7036041cdd8bf974431a9ae57f24120e9cb6f30edd3df8f947087a2bad"}
Jan 21 13:28:21 crc kubenswrapper[4959]: I0121 13:28:21.814642 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b481c4d6-1f2e-40e5-a27b-3f840055418a","Type":"ContainerStarted","Data":"86894f3be923d56ad04284c2c1bc15387b8603b8fb5d044d77c2a071476bb64c"}
Jan 21 13:28:21 crc kubenswrapper[4959]: I0121 13:28:21.831932 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=43.363672686 podStartE2EDuration="53.831912497s" podCreationTimestamp="2026-01-21 13:27:28 +0000 UTC" firstStartedPulling="2026-01-21 13:28:10.713276776 +0000 UTC m=+1151.676307319" lastFinishedPulling="2026-01-21 13:28:21.181516587 +0000 UTC m=+1162.144547130" observedRunningTime="2026-01-21 13:28:21.828682779 +0000 UTC m=+1162.791713332" watchObservedRunningTime="2026-01-21 13:28:21.831912497 +0000 UTC m=+1162.794943030"
Jan 21 13:28:21 crc kubenswrapper[4959]: I0121 13:28:21.854671 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=45.985752787 podStartE2EDuration="55.854652923s" podCreationTimestamp="2026-01-21 13:27:26 +0000 UTC" firstStartedPulling="2026-01-21 13:28:11.328637085 +0000 UTC m=+1152.291667628" lastFinishedPulling="2026-01-21 13:28:21.197537221 +0000 UTC m=+1162.160567764" observedRunningTime="2026-01-21 13:28:21.851135117 +0000 UTC m=+1162.814165670" watchObservedRunningTime="2026-01-21 13:28:21.854652923 +0000 UTC m=+1162.817683486"
Jan 21 13:28:22 crc kubenswrapper[4959]: I0121 13:28:22.550025 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 21 13:28:22 crc kubenswrapper[4959]: I0121 13:28:22.822484 4959 generic.go:334] "Generic (PLEG): container finished" podID="d467040c-ef01-4a64-9d0e-bce50426c248" containerID="883746fa7e3b3e25759201f0a1f599262fab1533c858b4cc2a3dc80603668017" exitCode=0
Jan 21 13:28:22 crc kubenswrapper[4959]: I0121 13:28:22.822594 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d467040c-ef01-4a64-9d0e-bce50426c248","Type":"ContainerDied","Data":"883746fa7e3b3e25759201f0a1f599262fab1533c858b4cc2a3dc80603668017"}
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.670549 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.709739 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.832586 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d467040c-ef01-4a64-9d0e-bce50426c248","Type":"ContainerStarted","Data":"91276356571f11cf560fb0c75f9a2224f2a0a7a17a1bb62b1f400e612686e251"}
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.838980 4959 generic.go:334] "Generic (PLEG): container finished" podID="0ec5da5a-1451-45bb-867a-0edd91c811b2" containerID="ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb" exitCode=0
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.839055 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" event={"ID":"0ec5da5a-1451-45bb-867a-0edd91c811b2","Type":"ContainerDied","Data":"ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb"}
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.842351 4959 generic.go:334] "Generic (PLEG): container finished" podID="60b98eb7-0886-4619-afde-c4fb7c5ad7c4" containerID="e4c83ac8e17c77b378bf79fb35048c96e4615f23219753eccf471cd53e29cec0" exitCode=0
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.843138 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60b98eb7-0886-4619-afde-c4fb7c5ad7c4","Type":"ContainerDied","Data":"e4c83ac8e17c77b378bf79fb35048c96e4615f23219753eccf471cd53e29cec0"}
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.843794 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.857518 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.392838594 podStartE2EDuration="1m6.857498258s" podCreationTimestamp="2026-01-21 13:27:17 +0000 UTC" firstStartedPulling="2026-01-21 13:27:20.035379743 +0000 UTC m=+1100.998410276" lastFinishedPulling="2026-01-21 13:28:16.500039397 +0000 UTC m=+1157.463069940" observedRunningTime="2026-01-21 13:28:23.852830932 +0000 UTC m=+1164.815861495" watchObservedRunningTime="2026-01-21 13:28:23.857498258 +0000 UTC m=+1164.820528801"
Jan 21 13:28:23 crc kubenswrapper[4959]: I0121 13:28:23.893553 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.162142 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dxqds"]
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.210167 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2gr8n"]
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.212320 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.217596 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.221826 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2gr8n"]
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.237873 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dvtct"]
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.238950 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.240755 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.252892 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dvtct"]
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355483 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-ovs-rundir\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355539 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355595 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355676 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-config\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355727 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355807 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-ovn-rundir\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355834 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-combined-ca-bundle\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355864 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7cs9\" (UniqueName: \"kubernetes.io/projected/361b792c-5aa3-4aae-867e-ed16ee1c6452-kube-api-access-r7cs9\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355894 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-config\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.355940 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lh7m\" (UniqueName: \"kubernetes.io/projected/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-kube-api-access-5lh7m\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457616 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-ovn-rundir\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457703 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-combined-ca-bundle\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457731 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7cs9\" (UniqueName: \"kubernetes.io/projected/361b792c-5aa3-4aae-867e-ed16ee1c6452-kube-api-access-r7cs9\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457799 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-config\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457847 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lh7m\" (UniqueName: \"kubernetes.io/projected/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-kube-api-access-5lh7m\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457883 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-ovs-rundir\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457903 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457944 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457988 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-config\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.457987 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-ovn-rundir\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.458014 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.459443 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.459577 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-config\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.459571 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-config\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.459635 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-ovs-rundir\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.460139 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.469625 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-combined-ca-bundle\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.487733 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7cs9\" (UniqueName: \"kubernetes.io/projected/361b792c-5aa3-4aae-867e-ed16ee1c6452-kube-api-access-r7cs9\") pod \"dnsmasq-dns-6bc7876d45-2gr8n\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.490661 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.501334 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lh7m\" (UniqueName: \"kubernetes.io/projected/f68f9349-c3ff-4aef-93fc-69cf9e4c2541-kube-api-access-5lh7m\") pod \"ovn-controller-metrics-dvtct\" (UID: \"f68f9349-c3ff-4aef-93fc-69cf9e4c2541\") " pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.509008 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-859fg"]
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.533167 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.548975 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-hxqm5"]
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.550403 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hxqm5"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.551197 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dxqds"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.551462 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.553885 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.567900 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hxqm5"]
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.610588 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dvtct"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.627815 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.664926 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhd2b\" (UniqueName: \"kubernetes.io/projected/ab3e855e-ef21-4e3e-a048-91aaef598fde-kube-api-access-lhd2b\") pod \"ab3e855e-ef21-4e3e-a048-91aaef598fde\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") "
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665151 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-config\") pod \"ab3e855e-ef21-4e3e-a048-91aaef598fde\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") "
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665230 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-dns-svc\") pod \"ab3e855e-ef21-4e3e-a048-91aaef598fde\" (UID: \"ab3e855e-ef21-4e3e-a048-91aaef598fde\") "
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665569 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665603 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-config\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665667 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5"
Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665659 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab3e855e-ef21-4e3e-a048-91aaef598fde" (UID: "ab3e855e-ef21-4e3e-a048-91aaef598fde"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665753 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7m2m\" (UniqueName: \"kubernetes.io/projected/58cda1cd-195e-486d-9d1f-5eee6d6caf21-kube-api-access-s7m2m\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665813 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-dns-svc\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.665874 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.666012 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-config" (OuterVolumeSpecName: "config") pod "ab3e855e-ef21-4e3e-a048-91aaef598fde" (UID: "ab3e855e-ef21-4e3e-a048-91aaef598fde"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.674354 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3e855e-ef21-4e3e-a048-91aaef598fde-kube-api-access-lhd2b" (OuterVolumeSpecName: "kube-api-access-lhd2b") pod "ab3e855e-ef21-4e3e-a048-91aaef598fde" (UID: "ab3e855e-ef21-4e3e-a048-91aaef598fde"). InnerVolumeSpecName "kube-api-access-lhd2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.768001 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-dns-svc\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.768083 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.768135 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-config\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.768222 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.768475 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7m2m\" (UniqueName: \"kubernetes.io/projected/58cda1cd-195e-486d-9d1f-5eee6d6caf21-kube-api-access-s7m2m\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.768530 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhd2b\" (UniqueName: \"kubernetes.io/projected/ab3e855e-ef21-4e3e-a048-91aaef598fde-kube-api-access-lhd2b\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.768542 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab3e855e-ef21-4e3e-a048-91aaef598fde-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.769692 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-dns-svc\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.769951 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.770007 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hxqm5\" 
(UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.770919 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-config\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.790727 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7m2m\" (UniqueName: \"kubernetes.io/projected/58cda1cd-195e-486d-9d1f-5eee6d6caf21-kube-api-access-s7m2m\") pod \"dnsmasq-dns-8554648995-hxqm5\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.863164 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" event={"ID":"0ec5da5a-1451-45bb-867a-0edd91c811b2","Type":"ContainerStarted","Data":"88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0"} Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.863252 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2" containerName="dnsmasq-dns" containerID="cri-o://88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0" gracePeriod=10 Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.863464 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.865606 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dxqds" event={"ID":"ab3e855e-ef21-4e3e-a048-91aaef598fde","Type":"ContainerDied","Data":"6f456e2bb33deee26aaddafd535fcf44644bb00a29bf1a467057b523583fd36d"} Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.865719 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dxqds" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.877067 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60b98eb7-0886-4619-afde-c4fb7c5ad7c4","Type":"ContainerStarted","Data":"4d001efeb1048e7f0987f4aa03bb2bb1c25ccfa6f41926da680f6544737ce35a"} Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.906155 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.907443 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" podStartSLOduration=3.4556958079999998 podStartE2EDuration="1m8.90740706s" podCreationTimestamp="2026-01-21 13:27:16 +0000 UTC" firstStartedPulling="2026-01-21 13:27:17.259866844 +0000 UTC m=+1098.222897387" lastFinishedPulling="2026-01-21 13:28:22.711578096 +0000 UTC m=+1163.674608639" observedRunningTime="2026-01-21 13:28:24.888587751 +0000 UTC m=+1165.851618304" watchObservedRunningTime="2026-01-21 13:28:24.90740706 +0000 UTC m=+1165.870437603" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.917568 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 13:28:24 crc kubenswrapper[4959]: I0121 13:28:24.931878 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.102935359 podStartE2EDuration="1m5.931859313s" podCreationTimestamp="2026-01-21 13:27:19 +0000 UTC" firstStartedPulling="2026-01-21 13:27:21.690659713 +0000 UTC m=+1102.653690266" lastFinishedPulling="2026-01-21 13:28:16.519583677 +0000 UTC m=+1157.482614220" observedRunningTime="2026-01-21 13:28:24.91881605 +0000 UTC m=+1165.881846613" watchObservedRunningTime="2026-01-21 13:28:24.931859313 +0000 UTC m=+1165.894889856" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.071350 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2gr8n"] Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.104495 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dxqds"] Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.110989 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dxqds"] Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.169644 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dvtct"] Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.259446 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.261408 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.265611 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.265836 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.265922 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.265986 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-m6vkt" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.288676 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.325640 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3e855e-ef21-4e3e-a048-91aaef598fde" path="/var/lib/kubelet/pods/ab3e855e-ef21-4e3e-a048-91aaef598fde/volumes" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.411763 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.411826 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e2c3d3c-262c-478b-a773-10213c66032e-scripts\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.411878 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e2c3d3c-262c-478b-a773-10213c66032e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.411901 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwsm2\" (UniqueName: \"kubernetes.io/projected/8e2c3d3c-262c-478b-a773-10213c66032e-kube-api-access-gwsm2\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.411941 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.411991 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2c3d3c-262c-478b-a773-10213c66032e-config\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.412039 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.504156 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hxqm5"] Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.514038 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2c3d3c-262c-478b-a773-10213c66032e-config\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.514141 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.514204 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.514228 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e2c3d3c-262c-478b-a773-10213c66032e-scripts\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.514278 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e2c3d3c-262c-478b-a773-10213c66032e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.514298 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwsm2\" (UniqueName: \"kubernetes.io/projected/8e2c3d3c-262c-478b-a773-10213c66032e-kube-api-access-gwsm2\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.514353 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.515037 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2c3d3c-262c-478b-a773-10213c66032e-config\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.515152 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e2c3d3c-262c-478b-a773-10213c66032e-scripts\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " 
pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.515276 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e2c3d3c-262c-478b-a773-10213c66032e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.519724 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.519868 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.521178 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e2c3d3c-262c-478b-a773-10213c66032e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.532349 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwsm2\" (UniqueName: \"kubernetes.io/projected/8e2c3d3c-262c-478b-a773-10213c66032e-kube-api-access-gwsm2\") pod \"ovn-northd-0\" (UID: \"8e2c3d3c-262c-478b-a773-10213c66032e\") " pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: W0121 13:28:25.561965 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58cda1cd_195e_486d_9d1f_5eee6d6caf21.slice/crio-ad56fdd9cd17c7eb92af360df23ba7773b1cac01f5943e7ebfbdd34776b8c72d WatchSource:0}: Error finding container ad56fdd9cd17c7eb92af360df23ba7773b1cac01f5943e7ebfbdd34776b8c72d: Status 404 returned error can't find the container with id ad56fdd9cd17c7eb92af360df23ba7773b1cac01f5943e7ebfbdd34776b8c72d Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.572837 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.638199 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.717339 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtl52\" (UniqueName: \"kubernetes.io/projected/0ec5da5a-1451-45bb-867a-0edd91c811b2-kube-api-access-qtl52\") pod \"0ec5da5a-1451-45bb-867a-0edd91c811b2\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.717608 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-config\") pod \"0ec5da5a-1451-45bb-867a-0edd91c811b2\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.717807 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-dns-svc\") pod \"0ec5da5a-1451-45bb-867a-0edd91c811b2\" (UID: \"0ec5da5a-1451-45bb-867a-0edd91c811b2\") " Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.723311 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.725311 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec5da5a-1451-45bb-867a-0edd91c811b2-kube-api-access-qtl52" (OuterVolumeSpecName: "kube-api-access-qtl52") pod "0ec5da5a-1451-45bb-867a-0edd91c811b2" (UID: "0ec5da5a-1451-45bb-867a-0edd91c811b2"). InnerVolumeSpecName "kube-api-access-qtl52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.763280 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ec5da5a-1451-45bb-867a-0edd91c811b2" (UID: "0ec5da5a-1451-45bb-867a-0edd91c811b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.806185 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-config" (OuterVolumeSpecName: "config") pod "0ec5da5a-1451-45bb-867a-0edd91c811b2" (UID: "0ec5da5a-1451-45bb-867a-0edd91c811b2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.820524 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.820562 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec5da5a-1451-45bb-867a-0edd91c811b2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.820575 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtl52\" (UniqueName: \"kubernetes.io/projected/0ec5da5a-1451-45bb-867a-0edd91c811b2-kube-api-access-qtl52\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.883662 4959 generic.go:334] "Generic (PLEG): container finished" podID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerID="c4464062169907ad2b0b23d11d45c1eee09507f9abcc0faabf713f2297dbb8d9" exitCode=0 Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.883713 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" event={"ID":"361b792c-5aa3-4aae-867e-ed16ee1c6452","Type":"ContainerDied","Data":"c4464062169907ad2b0b23d11d45c1eee09507f9abcc0faabf713f2297dbb8d9"} Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.883736 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" event={"ID":"361b792c-5aa3-4aae-867e-ed16ee1c6452","Type":"ContainerStarted","Data":"376c9321a6c34819b965aa3775001254878275662a98a0609b0b4de49f1c26ae"} Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.886285 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dvtct" event={"ID":"f68f9349-c3ff-4aef-93fc-69cf9e4c2541","Type":"ContainerStarted","Data":"d6a48bb58cd314448ca4ea11ff3ab5368031b51b6b23ce0359e48df6a1624d66"} Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.886356 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dvtct" event={"ID":"f68f9349-c3ff-4aef-93fc-69cf9e4c2541","Type":"ContainerStarted","Data":"6fe870f10a16955e318a57534d95e89f2a5b128b1a80a53a301a83a2276d61fd"} Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.889616 4959 generic.go:334] "Generic (PLEG): container finished" podID="0ec5da5a-1451-45bb-867a-0edd91c811b2" containerID="88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0" exitCode=0 Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.889705 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" event={"ID":"0ec5da5a-1451-45bb-867a-0edd91c811b2","Type":"ContainerDied","Data":"88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0"} Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.889743 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" event={"ID":"0ec5da5a-1451-45bb-867a-0edd91c811b2","Type":"ContainerDied","Data":"bcf43bcbf3f1d6ea08591533640426bcb5ea6140510201ebd898ac8dd251b635"} Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.889764 4959 scope.go:117] "RemoveContainer" containerID="88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.889964 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-859fg" Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.899750 4959 generic.go:334] "Generic (PLEG): container finished" podID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" containerID="19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145" exitCode=0 Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.907935 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hxqm5" event={"ID":"58cda1cd-195e-486d-9d1f-5eee6d6caf21","Type":"ContainerDied","Data":"19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145"} Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.907979 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hxqm5" event={"ID":"58cda1cd-195e-486d-9d1f-5eee6d6caf21","Type":"ContainerStarted","Data":"ad56fdd9cd17c7eb92af360df23ba7773b1cac01f5943e7ebfbdd34776b8c72d"} Jan 21 13:28:25 crc kubenswrapper[4959]: I0121 13:28:25.950525 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dvtct" podStartSLOduration=1.950502787 podStartE2EDuration="1.950502787s" podCreationTimestamp="2026-01-21 13:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:25.946899199 +0000 UTC m=+1166.909929752" watchObservedRunningTime="2026-01-21 13:28:25.950502787 +0000 UTC m=+1166.913533330" Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.051067 4959 scope.go:117] "RemoveContainer" containerID="ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb" Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.070287 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-859fg"] Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.091972 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-859fg"] Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.111312 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 13:28:26 crc kubenswrapper[4959]: W0121 13:28:26.120408 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e2c3d3c_262c_478b_a773_10213c66032e.slice/crio-f2e0e706607737048643a46276bbd911be27d77f4332e85558e6cc30b9e4f299 WatchSource:0}: Error finding container f2e0e706607737048643a46276bbd911be27d77f4332e85558e6cc30b9e4f299: Status 404 returned error can't find the container with id f2e0e706607737048643a46276bbd911be27d77f4332e85558e6cc30b9e4f299 Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.139376 4959 scope.go:117] "RemoveContainer" containerID="88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0" Jan 21 13:28:26 crc kubenswrapper[4959]: E0121 13:28:26.141322 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0\": container with ID starting with 88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0 not found: ID does not exist" containerID="88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0" Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.141384 4959 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0"} err="failed to get container status \"88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0\": rpc error: code = NotFound desc = could not find container \"88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0\": container with ID starting with 88f610b0460bf5848c7a5ee9cc974537805521f2fd34a5d1b2945e74163daea0 not found: ID does not exist" Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.141412 4959 scope.go:117] "RemoveContainer" containerID="ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb" Jan 21 13:28:26 crc kubenswrapper[4959]: E0121 13:28:26.143817 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb\": container with ID starting with ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb not found: ID does not exist" containerID="ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb" Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.143877 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb"} err="failed to get container status \"ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb\": rpc error: code = NotFound desc = could not find container \"ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb\": container with ID starting with ae991c626e534acf594e1591e6dbce6c95be92eb057da08541619f9ed7cd6ddb not found: ID does not exist" Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.910205 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hxqm5" event={"ID":"58cda1cd-195e-486d-9d1f-5eee6d6caf21","Type":"ContainerStarted","Data":"1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b"} Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.910289 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.914295 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" event={"ID":"361b792c-5aa3-4aae-867e-ed16ee1c6452","Type":"ContainerStarted","Data":"f3637a488e2ee33deef539ab35e440102bcc19553db9f944bed13fee09fefa83"} Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.914396 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.915559 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8e2c3d3c-262c-478b-a773-10213c66032e","Type":"ContainerStarted","Data":"f2e0e706607737048643a46276bbd911be27d77f4332e85558e6cc30b9e4f299"} Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.934347 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-hxqm5" podStartSLOduration=2.934328639 podStartE2EDuration="2.934328639s" podCreationTimestamp="2026-01-21 13:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:26.930787023 +0000 UTC m=+1167.893817576" watchObservedRunningTime="2026-01-21 13:28:26.934328639 +0000 UTC m=+1167.897359182" 
Jan 21 13:28:26 crc kubenswrapper[4959]: I0121 13:28:26.953401 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" podStartSLOduration=2.953378355 podStartE2EDuration="2.953378355s" podCreationTimestamp="2026-01-21 13:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:26.948713858 +0000 UTC m=+1167.911744401" watchObservedRunningTime="2026-01-21 13:28:26.953378355 +0000 UTC m=+1167.916408898"
Jan 21 13:28:27 crc kubenswrapper[4959]: I0121 13:28:27.299148 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2" path="/var/lib/kubelet/pods/0ec5da5a-1451-45bb-867a-0edd91c811b2/volumes"
Jan 21 13:28:27 crc kubenswrapper[4959]: I0121 13:28:27.929514 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4c3e540-1be8-41f8-92e8-1371c406c6f2","Type":"ContainerStarted","Data":"483dd939c23f3cc8282b4883a07c0249ade7cd4e1ff75800984bce52ebaac94c"}
Jan 21 13:28:27 crc kubenswrapper[4959]: I0121 13:28:27.930041 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 21 13:28:27 crc kubenswrapper[4959]: I0121 13:28:27.933155 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8e2c3d3c-262c-478b-a773-10213c66032e","Type":"ContainerStarted","Data":"38c66af8a8c4497c6bc635fbeda8f2b6c8567b41c3faed4a1bdf30a7d20fc050"}
Jan 21 13:28:27 crc kubenswrapper[4959]: I0121 13:28:27.933208 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8e2c3d3c-262c-478b-a773-10213c66032e","Type":"ContainerStarted","Data":"55a8b9e0a09fb50f5410f3c3da00d67657985be0ce3195b6b921a208fb3b5db8"}
Jan 21 13:28:27 crc kubenswrapper[4959]: I0121 13:28:27.953418 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.825812434 podStartE2EDuration="1m5.953401285s" podCreationTimestamp="2026-01-21 13:27:22 +0000 UTC" firstStartedPulling="2026-01-21 13:27:23.723735629 +0000 UTC m=+1104.686766172" lastFinishedPulling="2026-01-21 13:28:26.85132448 +0000 UTC m=+1167.814355023" observedRunningTime="2026-01-21 13:28:27.944398331 +0000 UTC m=+1168.907428874" watchObservedRunningTime="2026-01-21 13:28:27.953401285 +0000 UTC m=+1168.916431828"
Jan 21 13:28:27 crc kubenswrapper[4959]: I0121 13:28:27.980153 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.607487144 podStartE2EDuration="2.980128609s" podCreationTimestamp="2026-01-21 13:28:25 +0000 UTC" firstStartedPulling="2026-01-21 13:28:26.138646304 +0000 UTC m=+1167.101676847" lastFinishedPulling="2026-01-21 13:28:27.511287769 +0000 UTC m=+1168.474318312" observedRunningTime="2026-01-21 13:28:27.965794071 +0000 UTC m=+1168.928824614" watchObservedRunningTime="2026-01-21 13:28:27.980128609 +0000 UTC m=+1168.943159152"
Jan 21 13:28:28 crc kubenswrapper[4959]: I0121 13:28:28.939721 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 21 13:28:29 crc kubenswrapper[4959]: I0121 13:28:29.304025 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 21 13:28:29 crc kubenswrapper[4959]: I0121 13:28:29.304230 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 21 13:28:29 crc kubenswrapper[4959]: I0121 13:28:29.379362 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.008378 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.319410 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-93be-account-create-update-nzrqj"]
Jan 21 13:28:30 crc kubenswrapper[4959]: E0121 13:28:30.319712 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2" containerName="dnsmasq-dns"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.319723 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2" containerName="dnsmasq-dns"
Jan 21 13:28:30 crc kubenswrapper[4959]: E0121 13:28:30.319758 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2" containerName="init"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.319765 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2" containerName="init"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.319911 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec5da5a-1451-45bb-867a-0edd91c811b2" containerName="dnsmasq-dns"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.320428 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93be-account-create-update-nzrqj"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.322880 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.332245 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-93be-account-create-update-nzrqj"]
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.406986 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-gmt72"]
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.408138 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gmt72"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.414132 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7d70b3-f5c3-4257-b166-8883de61c0b3-operator-scripts\") pod \"keystone-93be-account-create-update-nzrqj\" (UID: \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\") " pod="openstack/keystone-93be-account-create-update-nzrqj"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.414325 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k66kp\" (UniqueName: \"kubernetes.io/projected/0d7d70b3-f5c3-4257-b166-8883de61c0b3-kube-api-access-k66kp\") pod \"keystone-93be-account-create-update-nzrqj\" (UID: \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\") " pod="openstack/keystone-93be-account-create-update-nzrqj"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.422711 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gmt72"]
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.515535 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7d70b3-f5c3-4257-b166-8883de61c0b3-operator-scripts\") pod \"keystone-93be-account-create-update-nzrqj\" (UID: \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\") " pod="openstack/keystone-93be-account-create-update-nzrqj"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.515634 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95967790-40ad-4e28-a454-277919176550-operator-scripts\") pod \"keystone-db-create-gmt72\" (UID: \"95967790-40ad-4e28-a454-277919176550\") " pod="openstack/keystone-db-create-gmt72"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.515712 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k66kp\" (UniqueName: \"kubernetes.io/projected/0d7d70b3-f5c3-4257-b166-8883de61c0b3-kube-api-access-k66kp\") pod \"keystone-93be-account-create-update-nzrqj\" (UID: \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\") " pod="openstack/keystone-93be-account-create-update-nzrqj"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.516005 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6hg\" (UniqueName: \"kubernetes.io/projected/95967790-40ad-4e28-a454-277919176550-kube-api-access-cd6hg\") pod \"keystone-db-create-gmt72\" (UID: \"95967790-40ad-4e28-a454-277919176550\") " pod="openstack/keystone-db-create-gmt72"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.517060 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7d70b3-f5c3-4257-b166-8883de61c0b3-operator-scripts\") pod \"keystone-93be-account-create-update-nzrqj\" (UID: \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\") " pod="openstack/keystone-93be-account-create-update-nzrqj"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.534354 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k66kp\" (UniqueName: \"kubernetes.io/projected/0d7d70b3-f5c3-4257-b166-8883de61c0b3-kube-api-access-k66kp\") pod \"keystone-93be-account-create-update-nzrqj\" (UID: \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\") " pod="openstack/keystone-93be-account-create-update-nzrqj"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.586121 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-l5zlf"]
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.588527 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l5zlf"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.598014 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l5zlf"]
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.617647 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95967790-40ad-4e28-a454-277919176550-operator-scripts\") pod \"keystone-db-create-gmt72\" (UID: \"95967790-40ad-4e28-a454-277919176550\") " pod="openstack/keystone-db-create-gmt72"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.617782 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6hg\" (UniqueName: \"kubernetes.io/projected/95967790-40ad-4e28-a454-277919176550-kube-api-access-cd6hg\") pod \"keystone-db-create-gmt72\" (UID: \"95967790-40ad-4e28-a454-277919176550\") " pod="openstack/keystone-db-create-gmt72"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.618724 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95967790-40ad-4e28-a454-277919176550-operator-scripts\") pod \"keystone-db-create-gmt72\" (UID: \"95967790-40ad-4e28-a454-277919176550\") " pod="openstack/keystone-db-create-gmt72"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.637566 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6hg\" (UniqueName: \"kubernetes.io/projected/95967790-40ad-4e28-a454-277919176550-kube-api-access-cd6hg\") pod \"keystone-db-create-gmt72\" (UID: \"95967790-40ad-4e28-a454-277919176550\") " pod="openstack/keystone-db-create-gmt72"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.640634 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93be-account-create-update-nzrqj"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.692316 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6593-account-create-update-7p2kt"]
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.693690 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6593-account-create-update-7p2kt"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.696752 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.710219 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6593-account-create-update-7p2kt"]
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.726903 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gmt72"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.727556 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-operator-scripts\") pod \"placement-db-create-l5zlf\" (UID: \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\") " pod="openstack/placement-db-create-l5zlf"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.727746 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hfvt\" (UniqueName: \"kubernetes.io/projected/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-kube-api-access-9hfvt\") pod \"placement-db-create-l5zlf\" (UID: \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\") " pod="openstack/placement-db-create-l5zlf"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.829151 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdfh\" (UniqueName: \"kubernetes.io/projected/ab805b64-b362-482a-9421-0e75b98afbdc-kube-api-access-9hdfh\") pod \"placement-6593-account-create-update-7p2kt\" (UID: \"ab805b64-b362-482a-9421-0e75b98afbdc\") " pod="openstack/placement-6593-account-create-update-7p2kt"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.829520 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-operator-scripts\") pod \"placement-db-create-l5zlf\" (UID: \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\") " pod="openstack/placement-db-create-l5zlf"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.829579 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab805b64-b362-482a-9421-0e75b98afbdc-operator-scripts\") pod \"placement-6593-account-create-update-7p2kt\" (UID: \"ab805b64-b362-482a-9421-0e75b98afbdc\") " pod="openstack/placement-6593-account-create-update-7p2kt"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.829628 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hfvt\" (UniqueName: \"kubernetes.io/projected/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-kube-api-access-9hfvt\") pod \"placement-db-create-l5zlf\" (UID: \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\") " pod="openstack/placement-db-create-l5zlf"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.831036 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-operator-scripts\") pod \"placement-db-create-l5zlf\" (UID: \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\") " pod="openstack/placement-db-create-l5zlf"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.850403 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hfvt\" (UniqueName: \"kubernetes.io/projected/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-kube-api-access-9hfvt\") pod \"placement-db-create-l5zlf\" (UID: \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\") " pod="openstack/placement-db-create-l5zlf"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.928193 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l5zlf"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.931028 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdfh\" (UniqueName: \"kubernetes.io/projected/ab805b64-b362-482a-9421-0e75b98afbdc-kube-api-access-9hdfh\") pod \"placement-6593-account-create-update-7p2kt\" (UID: \"ab805b64-b362-482a-9421-0e75b98afbdc\") " pod="openstack/placement-6593-account-create-update-7p2kt"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.931207 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab805b64-b362-482a-9421-0e75b98afbdc-operator-scripts\") pod \"placement-6593-account-create-update-7p2kt\" (UID: \"ab805b64-b362-482a-9421-0e75b98afbdc\") " pod="openstack/placement-6593-account-create-update-7p2kt"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.932007 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab805b64-b362-482a-9421-0e75b98afbdc-operator-scripts\") pod \"placement-6593-account-create-update-7p2kt\" (UID: \"ab805b64-b362-482a-9421-0e75b98afbdc\") " pod="openstack/placement-6593-account-create-update-7p2kt"
Jan 21 13:28:30 crc kubenswrapper[4959]: I0121 13:28:30.950990 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdfh\" (UniqueName: \"kubernetes.io/projected/ab805b64-b362-482a-9421-0e75b98afbdc-kube-api-access-9hdfh\") pod \"placement-6593-account-create-update-7p2kt\" (UID: \"ab805b64-b362-482a-9421-0e75b98afbdc\") " pod="openstack/placement-6593-account-create-update-7p2kt"
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.094147 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-93be-account-create-update-nzrqj"]
Jan 21 13:28:31 crc kubenswrapper[4959]: W0121 13:28:31.098338 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d7d70b3_f5c3_4257_b166_8883de61c0b3.slice/crio-cd8b819a3ad04df1f306c4f8e117d5549249fe9b6d934f4b1dc4af76cb087f21 WatchSource:0}: Error finding container cd8b819a3ad04df1f306c4f8e117d5549249fe9b6d934f4b1dc4af76cb087f21: Status 404 returned error can't find the container with id cd8b819a3ad04df1f306c4f8e117d5549249fe9b6d934f4b1dc4af76cb087f21
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.113793 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.113854 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.122489 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6593-account-create-update-7p2kt"
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.197758 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.218481 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gmt72"]
Jan 21 13:28:31 crc kubenswrapper[4959]: W0121 13:28:31.229962 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95967790_40ad_4e28_a454_277919176550.slice/crio-a292b4e7057f5e96cd47dd738cbd97584a73ca19284b9a1f7729972a96276461 WatchSource:0}: Error finding container a292b4e7057f5e96cd47dd738cbd97584a73ca19284b9a1f7729972a96276461: Status 404 returned error can't find the container with id a292b4e7057f5e96cd47dd738cbd97584a73ca19284b9a1f7729972a96276461
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.394030 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l5zlf"]
Jan 21 13:28:31 crc kubenswrapper[4959]: W0121 13:28:31.400933 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbda7c4c_90f4_49c2_a9d3_e0e839c7a5df.slice/crio-285234aae63d3897ebc01031c512dd4065b487811b2ba2c6fd12796c6e0b5757 WatchSource:0}: Error finding container 285234aae63d3897ebc01031c512dd4065b487811b2ba2c6fd12796c6e0b5757: Status 404 returned error can't find the container with id 285234aae63d3897ebc01031c512dd4065b487811b2ba2c6fd12796c6e0b5757
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.622250 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6593-account-create-update-7p2kt"]
Jan 21 13:28:31 crc kubenswrapper[4959]: W0121 13:28:31.659691 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab805b64_b362_482a_9421_0e75b98afbdc.slice/crio-7d55832873f96aa1b494c34e858c69ef7667f7f126d5e8c33ee01bed3954e4ca WatchSource:0}: Error finding container 7d55832873f96aa1b494c34e858c69ef7667f7f126d5e8c33ee01bed3954e4ca: Status 404 returned error can't find the container with id 7d55832873f96aa1b494c34e858c69ef7667f7f126d5e8c33ee01bed3954e4ca
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.970146 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l5zlf" event={"ID":"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df","Type":"ContainerStarted","Data":"2c2d34de2043118e4459e2486e63ddb7757ce47fc2bca2db2d6f0773da8ce26c"}
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.970192 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l5zlf" event={"ID":"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df","Type":"ContainerStarted","Data":"285234aae63d3897ebc01031c512dd4065b487811b2ba2c6fd12796c6e0b5757"}
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.973128 4959 generic.go:334] "Generic (PLEG): container finished" podID="0d7d70b3-f5c3-4257-b166-8883de61c0b3" containerID="acc29c92913bded6c77c99e55a13ae2a4af53c86c6bf8621bac8a373989ca624" exitCode=0
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.973177 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93be-account-create-update-nzrqj" event={"ID":"0d7d70b3-f5c3-4257-b166-8883de61c0b3","Type":"ContainerDied","Data":"acc29c92913bded6c77c99e55a13ae2a4af53c86c6bf8621bac8a373989ca624"}
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.973198 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93be-account-create-update-nzrqj" event={"ID":"0d7d70b3-f5c3-4257-b166-8883de61c0b3","Type":"ContainerStarted","Data":"cd8b819a3ad04df1f306c4f8e117d5549249fe9b6d934f4b1dc4af76cb087f21"}
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.974639 4959 generic.go:334] "Generic (PLEG): container finished" podID="95967790-40ad-4e28-a454-277919176550" containerID="ed458762e363305ae08229ec5d4c586f606198fa1d5d39ced525483442aef9c9" exitCode=0
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.974686 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gmt72" event={"ID":"95967790-40ad-4e28-a454-277919176550","Type":"ContainerDied","Data":"ed458762e363305ae08229ec5d4c586f606198fa1d5d39ced525483442aef9c9"}
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.974700 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gmt72" event={"ID":"95967790-40ad-4e28-a454-277919176550","Type":"ContainerStarted","Data":"a292b4e7057f5e96cd47dd738cbd97584a73ca19284b9a1f7729972a96276461"}
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.977448 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6593-account-create-update-7p2kt" event={"ID":"ab805b64-b362-482a-9421-0e75b98afbdc","Type":"ContainerStarted","Data":"98e430745bf400b4c3865e365fd8364e3db8be4b07411ceb7a6f2095fdd5927c"}
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.977474 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6593-account-create-update-7p2kt" event={"ID":"ab805b64-b362-482a-9421-0e75b98afbdc","Type":"ContainerStarted","Data":"7d55832873f96aa1b494c34e858c69ef7667f7f126d5e8c33ee01bed3954e4ca"}
Jan 21 13:28:31 crc kubenswrapper[4959]: I0121 13:28:31.995291 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-l5zlf" podStartSLOduration=1.995272352 podStartE2EDuration="1.995272352s" podCreationTimestamp="2026-01-21 13:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:31.987365417 +0000 UTC m=+1172.950395960" watchObservedRunningTime="2026-01-21 13:28:31.995272352 +0000 UTC m=+1172.958302895"
Jan 21 13:28:32 crc kubenswrapper[4959]: I0121 13:28:32.023293 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6593-account-create-update-7p2kt" podStartSLOduration=2.02327602 podStartE2EDuration="2.02327602s" podCreationTimestamp="2026-01-21 13:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:32.017830443 +0000 UTC m=+1172.980861026" watchObservedRunningTime="2026-01-21 13:28:32.02327602 +0000 UTC m=+1172.986306563"
Jan 21 13:28:32 crc kubenswrapper[4959]: I0121 13:28:32.046791 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 21 13:28:32 crc kubenswrapper[4959]: I0121 13:28:32.531888 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 21 13:28:32 crc kubenswrapper[4959]:
I0121 13:28:32.992534 4959 generic.go:334] "Generic (PLEG): container finished" podID="ab805b64-b362-482a-9421-0e75b98afbdc" containerID="98e430745bf400b4c3865e365fd8364e3db8be4b07411ceb7a6f2095fdd5927c" exitCode=0 Jan 21 13:28:32 crc kubenswrapper[4959]: I0121 13:28:32.992734 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6593-account-create-update-7p2kt" event={"ID":"ab805b64-b362-482a-9421-0e75b98afbdc","Type":"ContainerDied","Data":"98e430745bf400b4c3865e365fd8364e3db8be4b07411ceb7a6f2095fdd5927c"} Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.000475 4959 generic.go:334] "Generic (PLEG): container finished" podID="dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df" containerID="2c2d34de2043118e4459e2486e63ddb7757ce47fc2bca2db2d6f0773da8ce26c" exitCode=0 Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.000617 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l5zlf" event={"ID":"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df","Type":"ContainerDied","Data":"2c2d34de2043118e4459e2486e63ddb7757ce47fc2bca2db2d6f0773da8ce26c"} Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.437367 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93be-account-create-update-nzrqj" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.443285 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gmt72" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.477256 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd6hg\" (UniqueName: \"kubernetes.io/projected/95967790-40ad-4e28-a454-277919176550-kube-api-access-cd6hg\") pod \"95967790-40ad-4e28-a454-277919176550\" (UID: \"95967790-40ad-4e28-a454-277919176550\") " Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.477377 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7d70b3-f5c3-4257-b166-8883de61c0b3-operator-scripts\") pod \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\" (UID: \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\") " Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.477414 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k66kp\" (UniqueName: \"kubernetes.io/projected/0d7d70b3-f5c3-4257-b166-8883de61c0b3-kube-api-access-k66kp\") pod \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\" (UID: \"0d7d70b3-f5c3-4257-b166-8883de61c0b3\") " Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.477531 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95967790-40ad-4e28-a454-277919176550-operator-scripts\") pod \"95967790-40ad-4e28-a454-277919176550\" (UID: \"95967790-40ad-4e28-a454-277919176550\") " Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.478296 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d7d70b3-f5c3-4257-b166-8883de61c0b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d7d70b3-f5c3-4257-b166-8883de61c0b3" (UID: "0d7d70b3-f5c3-4257-b166-8883de61c0b3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.478384 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95967790-40ad-4e28-a454-277919176550-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95967790-40ad-4e28-a454-277919176550" (UID: "95967790-40ad-4e28-a454-277919176550"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.483124 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d7d70b3-f5c3-4257-b166-8883de61c0b3-kube-api-access-k66kp" (OuterVolumeSpecName: "kube-api-access-k66kp") pod "0d7d70b3-f5c3-4257-b166-8883de61c0b3" (UID: "0d7d70b3-f5c3-4257-b166-8883de61c0b3"). InnerVolumeSpecName "kube-api-access-k66kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.484318 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95967790-40ad-4e28-a454-277919176550-kube-api-access-cd6hg" (OuterVolumeSpecName: "kube-api-access-cd6hg") pod "95967790-40ad-4e28-a454-277919176550" (UID: "95967790-40ad-4e28-a454-277919176550"). InnerVolumeSpecName "kube-api-access-cd6hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.579028 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95967790-40ad-4e28-a454-277919176550-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.579126 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd6hg\" (UniqueName: \"kubernetes.io/projected/95967790-40ad-4e28-a454-277919176550-kube-api-access-cd6hg\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.579148 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d7d70b3-f5c3-4257-b166-8883de61c0b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:33 crc kubenswrapper[4959]: I0121 13:28:33.579160 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k66kp\" (UniqueName: \"kubernetes.io/projected/0d7d70b3-f5c3-4257-b166-8883de61c0b3-kube-api-access-k66kp\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.011692 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gmt72" event={"ID":"95967790-40ad-4e28-a454-277919176550","Type":"ContainerDied","Data":"a292b4e7057f5e96cd47dd738cbd97584a73ca19284b9a1f7729972a96276461"} Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.011737 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a292b4e7057f5e96cd47dd738cbd97584a73ca19284b9a1f7729972a96276461" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.011801 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gmt72" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.016692 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-93be-account-create-update-nzrqj" event={"ID":"0d7d70b3-f5c3-4257-b166-8883de61c0b3","Type":"ContainerDied","Data":"cd8b819a3ad04df1f306c4f8e117d5549249fe9b6d934f4b1dc4af76cb087f21"} Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.016922 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd8b819a3ad04df1f306c4f8e117d5549249fe9b6d934f4b1dc4af76cb087f21" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.016846 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-93be-account-create-update-nzrqj" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.280909 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6593-account-create-update-7p2kt" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.391085 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l5zlf" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.393474 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab805b64-b362-482a-9421-0e75b98afbdc-operator-scripts\") pod \"ab805b64-b362-482a-9421-0e75b98afbdc\" (UID: \"ab805b64-b362-482a-9421-0e75b98afbdc\") " Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.393731 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hdfh\" (UniqueName: \"kubernetes.io/projected/ab805b64-b362-482a-9421-0e75b98afbdc-kube-api-access-9hdfh\") pod \"ab805b64-b362-482a-9421-0e75b98afbdc\" (UID: \"ab805b64-b362-482a-9421-0e75b98afbdc\") " Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.394159 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab805b64-b362-482a-9421-0e75b98afbdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab805b64-b362-482a-9421-0e75b98afbdc" (UID: "ab805b64-b362-482a-9421-0e75b98afbdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.394479 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab805b64-b362-482a-9421-0e75b98afbdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.401442 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab805b64-b362-482a-9421-0e75b98afbdc-kube-api-access-9hdfh" (OuterVolumeSpecName: "kube-api-access-9hdfh") pod "ab805b64-b362-482a-9421-0e75b98afbdc" (UID: "ab805b64-b362-482a-9421-0e75b98afbdc"). InnerVolumeSpecName "kube-api-access-9hdfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.495593 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hfvt\" (UniqueName: \"kubernetes.io/projected/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-kube-api-access-9hfvt\") pod \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\" (UID: \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\") " Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.495805 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-operator-scripts\") pod \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\" (UID: \"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df\") " Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.496138 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hdfh\" (UniqueName: \"kubernetes.io/projected/ab805b64-b362-482a-9421-0e75b98afbdc-kube-api-access-9hdfh\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.496692 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df" (UID: "dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.498653 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-kube-api-access-9hfvt" (OuterVolumeSpecName: "kube-api-access-9hfvt") pod "dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df" (UID: "dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df"). InnerVolumeSpecName "kube-api-access-9hfvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.535079 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.598584 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.602577 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hfvt\" (UniqueName: \"kubernetes.io/projected/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df-kube-api-access-9hfvt\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.909404 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:28:34 crc kubenswrapper[4959]: I0121 13:28:34.985807 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2gr8n"] Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.032954 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6593-account-create-update-7p2kt" event={"ID":"ab805b64-b362-482a-9421-0e75b98afbdc","Type":"ContainerDied","Data":"7d55832873f96aa1b494c34e858c69ef7667f7f126d5e8c33ee01bed3954e4ca"} Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.033003 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d55832873f96aa1b494c34e858c69ef7667f7f126d5e8c33ee01bed3954e4ca" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.033063 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6593-account-create-update-7p2kt" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.040751 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" podUID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerName="dnsmasq-dns" containerID="cri-o://f3637a488e2ee33deef539ab35e440102bcc19553db9f944bed13fee09fefa83" gracePeriod=10 Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.041658 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-l5zlf" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.042720 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l5zlf" event={"ID":"dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df","Type":"ContainerDied","Data":"285234aae63d3897ebc01031c512dd4065b487811b2ba2c6fd12796c6e0b5757"} Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.042768 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="285234aae63d3897ebc01031c512dd4065b487811b2ba2c6fd12796c6e0b5757" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.956777 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2kxzl"] Jan 21 13:28:35 crc kubenswrapper[4959]: E0121 13:28:35.957524 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab805b64-b362-482a-9421-0e75b98afbdc" containerName="mariadb-account-create-update" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.957542 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab805b64-b362-482a-9421-0e75b98afbdc" containerName="mariadb-account-create-update" Jan 21 13:28:35 crc kubenswrapper[4959]: E0121 13:28:35.957570 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7d70b3-f5c3-4257-b166-8883de61c0b3" containerName="mariadb-account-create-update" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.957578 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7d70b3-f5c3-4257-b166-8883de61c0b3" containerName="mariadb-account-create-update" Jan 21 13:28:35 crc kubenswrapper[4959]: E0121 13:28:35.957596 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df" containerName="mariadb-database-create" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.957605 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df" containerName="mariadb-database-create" Jan 21 13:28:35 crc kubenswrapper[4959]: E0121 13:28:35.957625 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95967790-40ad-4e28-a454-277919176550" containerName="mariadb-database-create" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.957631 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="95967790-40ad-4e28-a454-277919176550" containerName="mariadb-database-create" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.957799 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d7d70b3-f5c3-4257-b166-8883de61c0b3" containerName="mariadb-account-create-update" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.957809 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="95967790-40ad-4e28-a454-277919176550" containerName="mariadb-database-create" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.957824 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df" containerName="mariadb-database-create" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.957839 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab805b64-b362-482a-9421-0e75b98afbdc" containerName="mariadb-account-create-update" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.958454 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:35 crc kubenswrapper[4959]: I0121 13:28:35.967810 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2kxzl"] Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.025846 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42032608-71eb-41ff-87a3-4a06780169ac-operator-scripts\") pod \"glance-db-create-2kxzl\" (UID: \"42032608-71eb-41ff-87a3-4a06780169ac\") " pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.025919 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg5t8\" (UniqueName: \"kubernetes.io/projected/42032608-71eb-41ff-87a3-4a06780169ac-kube-api-access-dg5t8\") pod \"glance-db-create-2kxzl\" (UID: \"42032608-71eb-41ff-87a3-4a06780169ac\") " pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.054894 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-878b-account-create-update-mmmb5"] Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.056276 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.058543 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.066573 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-878b-account-create-update-mmmb5"] Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.127945 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b320e7fa-a18b-46ff-be7b-1753fb60b768-operator-scripts\") pod \"glance-878b-account-create-update-mmmb5\" (UID: \"b320e7fa-a18b-46ff-be7b-1753fb60b768\") " pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.128029 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42032608-71eb-41ff-87a3-4a06780169ac-operator-scripts\") pod \"glance-db-create-2kxzl\" (UID: \"42032608-71eb-41ff-87a3-4a06780169ac\") " pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.128110 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbp2r\" (UniqueName: \"kubernetes.io/projected/b320e7fa-a18b-46ff-be7b-1753fb60b768-kube-api-access-vbp2r\") pod \"glance-878b-account-create-update-mmmb5\" (UID: \"b320e7fa-a18b-46ff-be7b-1753fb60b768\") " pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.128147 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg5t8\" (UniqueName: \"kubernetes.io/projected/42032608-71eb-41ff-87a3-4a06780169ac-kube-api-access-dg5t8\") pod \"glance-db-create-2kxzl\" (UID: \"42032608-71eb-41ff-87a3-4a06780169ac\") " pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.128788 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/42032608-71eb-41ff-87a3-4a06780169ac-operator-scripts\") pod \"glance-db-create-2kxzl\" (UID: \"42032608-71eb-41ff-87a3-4a06780169ac\") " pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.148155 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg5t8\" (UniqueName: \"kubernetes.io/projected/42032608-71eb-41ff-87a3-4a06780169ac-kube-api-access-dg5t8\") pod \"glance-db-create-2kxzl\" (UID: \"42032608-71eb-41ff-87a3-4a06780169ac\") " pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.229450 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b320e7fa-a18b-46ff-be7b-1753fb60b768-operator-scripts\") pod \"glance-878b-account-create-update-mmmb5\" (UID: \"b320e7fa-a18b-46ff-be7b-1753fb60b768\") " pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.230175 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbp2r\" (UniqueName: \"kubernetes.io/projected/b320e7fa-a18b-46ff-be7b-1753fb60b768-kube-api-access-vbp2r\") pod \"glance-878b-account-create-update-mmmb5\" (UID: \"b320e7fa-a18b-46ff-be7b-1753fb60b768\") " pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.230393 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b320e7fa-a18b-46ff-be7b-1753fb60b768-operator-scripts\") pod \"glance-878b-account-create-update-mmmb5\" (UID: \"b320e7fa-a18b-46ff-be7b-1753fb60b768\") " pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.248001 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbp2r\" (UniqueName: \"kubernetes.io/projected/b320e7fa-a18b-46ff-be7b-1753fb60b768-kube-api-access-vbp2r\") pod \"glance-878b-account-create-update-mmmb5\" (UID: \"b320e7fa-a18b-46ff-be7b-1753fb60b768\") " pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.276610 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.372357 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.523062 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2kxzl"] Jan 21 13:28:36 crc kubenswrapper[4959]: I0121 13:28:36.842757 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-878b-account-create-update-mmmb5"] Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.054166 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-878b-account-create-update-mmmb5" event={"ID":"b320e7fa-a18b-46ff-be7b-1753fb60b768","Type":"ContainerStarted","Data":"ab757597aa068bcb9926130484a8d90ca4668e0a2e82c26ce4784c0ce77b0725"} Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.055316 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2kxzl" event={"ID":"42032608-71eb-41ff-87a3-4a06780169ac","Type":"ContainerStarted","Data":"b11ab4a919cd396f09e5e438d24e1feb127c2e91a09664a4bd448bb20b95639b"} Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.583444 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xchjg"] Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.584614 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.603611 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.606479 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xchjg"] Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.652025 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-759d6\" (UniqueName: \"kubernetes.io/projected/9bc94b98-4e88-4d64-9b27-7413d97810df-kube-api-access-759d6\") pod \"root-account-create-update-xchjg\" (UID: \"9bc94b98-4e88-4d64-9b27-7413d97810df\") " pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.652105 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc94b98-4e88-4d64-9b27-7413d97810df-operator-scripts\") pod \"root-account-create-update-xchjg\" (UID: \"9bc94b98-4e88-4d64-9b27-7413d97810df\") " pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.753788 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc94b98-4e88-4d64-9b27-7413d97810df-operator-scripts\") pod \"root-account-create-update-xchjg\" (UID: \"9bc94b98-4e88-4d64-9b27-7413d97810df\") " pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.754056 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-759d6\" (UniqueName: \"kubernetes.io/projected/9bc94b98-4e88-4d64-9b27-7413d97810df-kube-api-access-759d6\") pod \"root-account-create-update-xchjg\" (UID: \"9bc94b98-4e88-4d64-9b27-7413d97810df\") " pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.754511 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc94b98-4e88-4d64-9b27-7413d97810df-operator-scripts\") pod \"root-account-create-update-xchjg\" (UID: \"9bc94b98-4e88-4d64-9b27-7413d97810df\") " pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.773671 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-759d6\" (UniqueName: \"kubernetes.io/projected/9bc94b98-4e88-4d64-9b27-7413d97810df-kube-api-access-759d6\") pod \"root-account-create-update-xchjg\" (UID: \"9bc94b98-4e88-4d64-9b27-7413d97810df\") " pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:37 crc kubenswrapper[4959]: I0121 13:28:37.903186 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:39 crc kubenswrapper[4959]: I0121 13:28:39.536365 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" podUID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 21 13:28:39 crc kubenswrapper[4959]: I0121 13:28:39.606681 4959 generic.go:334] "Generic (PLEG): container finished" podID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerID="f3637a488e2ee33deef539ab35e440102bcc19553db9f944bed13fee09fefa83" exitCode=0 Jan 21 13:28:39 crc kubenswrapper[4959]: I0121 13:28:39.606741 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" event={"ID":"361b792c-5aa3-4aae-867e-ed16ee1c6452","Type":"ContainerDied","Data":"f3637a488e2ee33deef539ab35e440102bcc19553db9f944bed13fee09fefa83"} Jan 21 13:28:39 crc kubenswrapper[4959]: I0121 13:28:39.901678 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xchjg"] Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.272063 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.300684 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-dns-svc\") pod \"361b792c-5aa3-4aae-867e-ed16ee1c6452\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.300722 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-ovsdbserver-sb\") pod \"361b792c-5aa3-4aae-867e-ed16ee1c6452\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.300821 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7cs9\" (UniqueName: \"kubernetes.io/projected/361b792c-5aa3-4aae-867e-ed16ee1c6452-kube-api-access-r7cs9\") pod \"361b792c-5aa3-4aae-867e-ed16ee1c6452\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.300918 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-config\") pod \"361b792c-5aa3-4aae-867e-ed16ee1c6452\" (UID: \"361b792c-5aa3-4aae-867e-ed16ee1c6452\") " Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.310410 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/361b792c-5aa3-4aae-867e-ed16ee1c6452-kube-api-access-r7cs9" (OuterVolumeSpecName: "kube-api-access-r7cs9") pod "361b792c-5aa3-4aae-867e-ed16ee1c6452" (UID: "361b792c-5aa3-4aae-867e-ed16ee1c6452"). InnerVolumeSpecName "kube-api-access-r7cs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.345858 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "361b792c-5aa3-4aae-867e-ed16ee1c6452" (UID: "361b792c-5aa3-4aae-867e-ed16ee1c6452"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.346370 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-config" (OuterVolumeSpecName: "config") pod "361b792c-5aa3-4aae-867e-ed16ee1c6452" (UID: "361b792c-5aa3-4aae-867e-ed16ee1c6452"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.349599 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "361b792c-5aa3-4aae-867e-ed16ee1c6452" (UID: "361b792c-5aa3-4aae-867e-ed16ee1c6452"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.403355 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.403403 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.403416 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7cs9\" (UniqueName: \"kubernetes.io/projected/361b792c-5aa3-4aae-867e-ed16ee1c6452-kube-api-access-r7cs9\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.403426 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/361b792c-5aa3-4aae-867e-ed16ee1c6452-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.619260 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" event={"ID":"361b792c-5aa3-4aae-867e-ed16ee1c6452","Type":"ContainerDied","Data":"376c9321a6c34819b965aa3775001254878275662a98a0609b0b4de49f1c26ae"} Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.619311 4959 scope.go:117] "RemoveContainer" containerID="f3637a488e2ee33deef539ab35e440102bcc19553db9f944bed13fee09fefa83" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.619418 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-2gr8n" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.626292 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2kxzl" event={"ID":"42032608-71eb-41ff-87a3-4a06780169ac","Type":"ContainerStarted","Data":"387cce0b61f4a11b22b5181f73ded596c601b1a626f12e0cf65a89488fd6404f"} Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.628354 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xchjg" event={"ID":"9bc94b98-4e88-4d64-9b27-7413d97810df","Type":"ContainerStarted","Data":"379b01d08672afb6c86d32a399b872dcfcf4d2b56941a3fa58f02455853fa024"} Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.629375 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-878b-account-create-update-mmmb5" event={"ID":"b320e7fa-a18b-46ff-be7b-1753fb60b768","Type":"ContainerStarted","Data":"c7e3c67f846e43f396ae424ec508ed3b0511eadefa537abefab06029f9a7ff52"} Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.645024 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-2kxzl" podStartSLOduration=5.64500684 podStartE2EDuration="5.64500684s" podCreationTimestamp="2026-01-21 13:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:40.64461638 +0000 UTC m=+1181.607646933" watchObservedRunningTime="2026-01-21 13:28:40.64500684 +0000 UTC m=+1181.608037383" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.648257 4959 scope.go:117] "RemoveContainer" containerID="c4464062169907ad2b0b23d11d45c1eee09507f9abcc0faabf713f2297dbb8d9" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.666401 4959 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-878b-account-create-update-mmmb5" podStartSLOduration=4.6663754189999995 podStartE2EDuration="4.666375419s" podCreationTimestamp="2026-01-21 13:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:40.656398509 +0000 UTC m=+1181.619429052" watchObservedRunningTime="2026-01-21 13:28:40.666375419 +0000 UTC m=+1181.629405962" Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.690036 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2gr8n"] Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.696648 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2gr8n"] Jan 21 13:28:40 crc kubenswrapper[4959]: I0121 13:28:40.718832 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 13:28:41 crc kubenswrapper[4959]: I0121 13:28:41.299702 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="361b792c-5aa3-4aae-867e-ed16ee1c6452" path="/var/lib/kubelet/pods/361b792c-5aa3-4aae-867e-ed16ee1c6452/volumes" Jan 21 13:28:41 crc kubenswrapper[4959]: I0121 13:28:41.638175 4959 generic.go:334] "Generic (PLEG): container finished" podID="42032608-71eb-41ff-87a3-4a06780169ac" containerID="387cce0b61f4a11b22b5181f73ded596c601b1a626f12e0cf65a89488fd6404f" exitCode=0 Jan 21 13:28:41 crc kubenswrapper[4959]: I0121 13:28:41.638256 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2kxzl" event={"ID":"42032608-71eb-41ff-87a3-4a06780169ac","Type":"ContainerDied","Data":"387cce0b61f4a11b22b5181f73ded596c601b1a626f12e0cf65a89488fd6404f"} Jan 21 13:28:41 crc kubenswrapper[4959]: I0121 13:28:41.641005 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xchjg" event={"ID":"9bc94b98-4e88-4d64-9b27-7413d97810df","Type":"ContainerStarted","Data":"f80f480d3f2f809562c976c8560d795be5ea388ff94cb11c39a2f69db9d1a11f"} Jan 21 13:28:41 crc kubenswrapper[4959]: I0121 13:28:41.671616 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xchjg" podStartSLOduration=4.671601551 podStartE2EDuration="4.671601551s" podCreationTimestamp="2026-01-21 13:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:41.665118245 +0000 UTC m=+1182.628148808" watchObservedRunningTime="2026-01-21 13:28:41.671601551 +0000 UTC m=+1182.634632094" Jan 21 13:28:42 crc kubenswrapper[4959]: I0121 13:28:42.650434 4959 generic.go:334] "Generic (PLEG): container finished" podID="b320e7fa-a18b-46ff-be7b-1753fb60b768" containerID="c7e3c67f846e43f396ae424ec508ed3b0511eadefa537abefab06029f9a7ff52" exitCode=0 Jan 21 13:28:42 crc kubenswrapper[4959]: I0121 13:28:42.650533 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-878b-account-create-update-mmmb5" event={"ID":"b320e7fa-a18b-46ff-be7b-1753fb60b768","Type":"ContainerDied","Data":"c7e3c67f846e43f396ae424ec508ed3b0511eadefa537abefab06029f9a7ff52"} Jan 21 13:28:42 crc kubenswrapper[4959]: I0121 13:28:42.652959 4959 generic.go:334] "Generic (PLEG): container finished" podID="9bc94b98-4e88-4d64-9b27-7413d97810df" containerID="f80f480d3f2f809562c976c8560d795be5ea388ff94cb11c39a2f69db9d1a11f" exitCode=0 
Jan 21 13:28:42 crc kubenswrapper[4959]: I0121 13:28:42.652979 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xchjg" event={"ID":"9bc94b98-4e88-4d64-9b27-7413d97810df","Type":"ContainerDied","Data":"f80f480d3f2f809562c976c8560d795be5ea388ff94cb11c39a2f69db9d1a11f"} Jan 21 13:28:42 crc kubenswrapper[4959]: I0121 13:28:42.988334 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.046072 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42032608-71eb-41ff-87a3-4a06780169ac-operator-scripts\") pod \"42032608-71eb-41ff-87a3-4a06780169ac\" (UID: \"42032608-71eb-41ff-87a3-4a06780169ac\") " Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.046210 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg5t8\" (UniqueName: \"kubernetes.io/projected/42032608-71eb-41ff-87a3-4a06780169ac-kube-api-access-dg5t8\") pod \"42032608-71eb-41ff-87a3-4a06780169ac\" (UID: \"42032608-71eb-41ff-87a3-4a06780169ac\") " Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.047043 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42032608-71eb-41ff-87a3-4a06780169ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42032608-71eb-41ff-87a3-4a06780169ac" (UID: "42032608-71eb-41ff-87a3-4a06780169ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.068879 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42032608-71eb-41ff-87a3-4a06780169ac-kube-api-access-dg5t8" (OuterVolumeSpecName: "kube-api-access-dg5t8") pod "42032608-71eb-41ff-87a3-4a06780169ac" (UID: "42032608-71eb-41ff-87a3-4a06780169ac"). InnerVolumeSpecName "kube-api-access-dg5t8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.147988 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42032608-71eb-41ff-87a3-4a06780169ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.148028 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg5t8\" (UniqueName: \"kubernetes.io/projected/42032608-71eb-41ff-87a3-4a06780169ac-kube-api-access-dg5t8\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.660912 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2kxzl" event={"ID":"42032608-71eb-41ff-87a3-4a06780169ac","Type":"ContainerDied","Data":"b11ab4a919cd396f09e5e438d24e1feb127c2e91a09664a4bd448bb20b95639b"} Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.660954 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b11ab4a919cd396f09e5e438d24e1feb127c2e91a09664a4bd448bb20b95639b" Jan 21 13:28:43 crc kubenswrapper[4959]: I0121 13:28:43.661144 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2kxzl" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.065756 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.066986 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.162475 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbp2r\" (UniqueName: \"kubernetes.io/projected/b320e7fa-a18b-46ff-be7b-1753fb60b768-kube-api-access-vbp2r\") pod \"b320e7fa-a18b-46ff-be7b-1753fb60b768\" (UID: \"b320e7fa-a18b-46ff-be7b-1753fb60b768\") " Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.163694 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc94b98-4e88-4d64-9b27-7413d97810df-operator-scripts\") pod \"9bc94b98-4e88-4d64-9b27-7413d97810df\" (UID: \"9bc94b98-4e88-4d64-9b27-7413d97810df\") " Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.163767 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-759d6\" (UniqueName: \"kubernetes.io/projected/9bc94b98-4e88-4d64-9b27-7413d97810df-kube-api-access-759d6\") pod \"9bc94b98-4e88-4d64-9b27-7413d97810df\" (UID: \"9bc94b98-4e88-4d64-9b27-7413d97810df\") " Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.164165 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b320e7fa-a18b-46ff-be7b-1753fb60b768-operator-scripts\") pod \"b320e7fa-a18b-46ff-be7b-1753fb60b768\" (UID: \"b320e7fa-a18b-46ff-be7b-1753fb60b768\") " Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.164160 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc94b98-4e88-4d64-9b27-7413d97810df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bc94b98-4e88-4d64-9b27-7413d97810df" (UID: "9bc94b98-4e88-4d64-9b27-7413d97810df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.164649 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc94b98-4e88-4d64-9b27-7413d97810df-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.164645 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b320e7fa-a18b-46ff-be7b-1753fb60b768-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b320e7fa-a18b-46ff-be7b-1753fb60b768" (UID: "b320e7fa-a18b-46ff-be7b-1753fb60b768"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.167623 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b320e7fa-a18b-46ff-be7b-1753fb60b768-kube-api-access-vbp2r" (OuterVolumeSpecName: "kube-api-access-vbp2r") pod "b320e7fa-a18b-46ff-be7b-1753fb60b768" (UID: "b320e7fa-a18b-46ff-be7b-1753fb60b768"). InnerVolumeSpecName "kube-api-access-vbp2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.167682 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc94b98-4e88-4d64-9b27-7413d97810df-kube-api-access-759d6" (OuterVolumeSpecName: "kube-api-access-759d6") pod "9bc94b98-4e88-4d64-9b27-7413d97810df" (UID: "9bc94b98-4e88-4d64-9b27-7413d97810df"). InnerVolumeSpecName "kube-api-access-759d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.267497 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b320e7fa-a18b-46ff-be7b-1753fb60b768-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.267540 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbp2r\" (UniqueName: \"kubernetes.io/projected/b320e7fa-a18b-46ff-be7b-1753fb60b768-kube-api-access-vbp2r\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.267559 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-759d6\" (UniqueName: \"kubernetes.io/projected/9bc94b98-4e88-4d64-9b27-7413d97810df-kube-api-access-759d6\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.673399 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-878b-account-create-update-mmmb5" event={"ID":"b320e7fa-a18b-46ff-be7b-1753fb60b768","Type":"ContainerDied","Data":"ab757597aa068bcb9926130484a8d90ca4668e0a2e82c26ce4784c0ce77b0725"} Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.673442 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab757597aa068bcb9926130484a8d90ca4668e0a2e82c26ce4784c0ce77b0725" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.673515 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-878b-account-create-update-mmmb5" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.676655 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xchjg" event={"ID":"9bc94b98-4e88-4d64-9b27-7413d97810df","Type":"ContainerDied","Data":"379b01d08672afb6c86d32a399b872dcfcf4d2b56941a3fa58f02455853fa024"} Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.676679 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="379b01d08672afb6c86d32a399b872dcfcf4d2b56941a3fa58f02455853fa024" Jan 21 13:28:44 crc kubenswrapper[4959]: I0121 13:28:44.676720 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xchjg" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.222579 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8244c"] Jan 21 13:28:46 crc kubenswrapper[4959]: E0121 13:28:46.223236 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42032608-71eb-41ff-87a3-4a06780169ac" containerName="mariadb-database-create" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223252 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="42032608-71eb-41ff-87a3-4a06780169ac" containerName="mariadb-database-create" Jan 21 13:28:46 crc kubenswrapper[4959]: E0121 13:28:46.223263 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc94b98-4e88-4d64-9b27-7413d97810df" containerName="mariadb-account-create-update" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223271 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc94b98-4e88-4d64-9b27-7413d97810df" containerName="mariadb-account-create-update" Jan 21 13:28:46 crc kubenswrapper[4959]: E0121 13:28:46.223293 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerName="dnsmasq-dns" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223301 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerName="dnsmasq-dns" Jan 21 13:28:46 crc kubenswrapper[4959]: E0121 13:28:46.223315 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b320e7fa-a18b-46ff-be7b-1753fb60b768" containerName="mariadb-account-create-update" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223322 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="b320e7fa-a18b-46ff-be7b-1753fb60b768" containerName="mariadb-account-create-update" Jan 21 13:28:46 crc kubenswrapper[4959]: E0121 13:28:46.223355 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerName="init" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223362 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerName="init" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223545 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="42032608-71eb-41ff-87a3-4a06780169ac" containerName="mariadb-database-create" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223560 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="361b792c-5aa3-4aae-867e-ed16ee1c6452" containerName="dnsmasq-dns" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223572 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc94b98-4e88-4d64-9b27-7413d97810df" containerName="mariadb-account-create-update" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.223587 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="b320e7fa-a18b-46ff-be7b-1753fb60b768" containerName="mariadb-account-create-update" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.224204 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.226737 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cn2hl" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.228186 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.237551 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8244c"] Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.322162 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnr5x\" (UniqueName: \"kubernetes.io/projected/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-kube-api-access-wnr5x\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.322219 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-config-data\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.322266 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-combined-ca-bundle\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.322290 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-db-sync-config-data\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.424242 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnr5x\" (UniqueName: \"kubernetes.io/projected/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-kube-api-access-wnr5x\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.424285 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-config-data\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.424319 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-combined-ca-bundle\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.424337 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-db-sync-config-data\") pod 
\"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.439353 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-db-sync-config-data\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.439406 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-combined-ca-bundle\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.441547 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-config-data\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.444760 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnr5x\" (UniqueName: \"kubernetes.io/projected/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-kube-api-access-wnr5x\") pod \"glance-db-sync-8244c\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " pod="openstack/glance-db-sync-8244c" Jan 21 13:28:46 crc kubenswrapper[4959]: I0121 13:28:46.549671 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8244c" Jan 21 13:28:47 crc kubenswrapper[4959]: I0121 13:28:47.089079 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8244c"] Jan 21 13:28:47 crc kubenswrapper[4959]: I0121 13:28:47.720539 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8244c" event={"ID":"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76","Type":"ContainerStarted","Data":"9fbb548898a65324ecf4d1d14da0b77416d81c8ad6a1ef5a89787a79d69a19d3"} Jan 21 13:28:47 crc kubenswrapper[4959]: I0121 13:28:47.911216 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nqz7q" podUID="77986c63-ba96-4c22-9b51-925c5b43b092" containerName="ovn-controller" probeResult="failure" output=< Jan 21 13:28:47 crc kubenswrapper[4959]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 13:28:47 crc kubenswrapper[4959]: > Jan 21 13:28:49 crc kubenswrapper[4959]: I0121 13:28:49.580460 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xchjg"] Jan 21 13:28:49 crc kubenswrapper[4959]: I0121 13:28:49.589419 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xchjg"] Jan 21 13:28:50 crc kubenswrapper[4959]: I0121 13:28:50.744407 4959 generic.go:334] "Generic (PLEG): container finished" podID="56f613f3-9dc0-438c-8232-190c680ab312" containerID="648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648" exitCode=0 Jan 21 13:28:50 crc kubenswrapper[4959]: I0121 13:28:50.744484 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"56f613f3-9dc0-438c-8232-190c680ab312","Type":"ContainerDied","Data":"648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648"} Jan 21 13:28:51 crc kubenswrapper[4959]: I0121 13:28:51.297429 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc94b98-4e88-4d64-9b27-7413d97810df" path="/var/lib/kubelet/pods/9bc94b98-4e88-4d64-9b27-7413d97810df/volumes" Jan 21 13:28:51 crc kubenswrapper[4959]: I0121 13:28:51.754887 4959 generic.go:334] "Generic (PLEG): container finished" podID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerID="489878be51100bbc8edd0fe92d1f85d34e280d023ad9591d04ed79f7501bbf46" exitCode=0 Jan 21 13:28:51 crc kubenswrapper[4959]: I0121 13:28:51.754950 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2","Type":"ContainerDied","Data":"489878be51100bbc8edd0fe92d1f85d34e280d023ad9591d04ed79f7501bbf46"} Jan 21 13:28:51 crc kubenswrapper[4959]: I0121 13:28:51.777493 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56f613f3-9dc0-438c-8232-190c680ab312","Type":"ContainerStarted","Data":"01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744"} Jan 21 13:28:51 crc kubenswrapper[4959]: I0121 13:28:51.777753 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:28:51 crc kubenswrapper[4959]: I0121 13:28:51.858018 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.197068785 podStartE2EDuration="1m35.857995467s" podCreationTimestamp="2026-01-21 13:27:16 +0000 UTC" firstStartedPulling="2026-01-21 13:27:19.863234459 +0000 UTC m=+1100.826265002" lastFinishedPulling="2026-01-21 13:28:16.524161141 +0000 UTC m=+1157.487191684" observedRunningTime="2026-01-21 13:28:51.84595141 +0000 UTC m=+1192.808981963" watchObservedRunningTime="2026-01-21 13:28:51.857995467 +0000 UTC m=+1192.821026010" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.191035 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2","Type":"ContainerStarted","Data":"48154a5bea0cee803d85fa08d60fae7a604f5f44a50c741949174835b02f3dac"} Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.191751 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.223434 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nqz7q" podUID="77986c63-ba96-4c22-9b51-925c5b43b092" containerName="ovn-controller" probeResult="failure" output=< Jan 21 13:28:53 crc kubenswrapper[4959]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 13:28:53 crc kubenswrapper[4959]: > Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.245679 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.259966 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371939.59483 podStartE2EDuration="1m37.259946056s" podCreationTimestamp="2026-01-21 13:27:16 +0000 UTC" firstStartedPulling="2026-01-21 13:27:19.51331101 +0000 UTC m=+1100.476341553" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:53.24167117 +0000 UTC m=+1194.204701723" watchObservedRunningTime="2026-01-21 13:28:53.259946056 +0000 UTC m=+1194.222976599" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.268905 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-945nd" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.493678 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nqz7q-config-t6fk4"] Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.494949 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.499750 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.506414 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqz7q-config-t6fk4"] Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.584485 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run-ovn\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.584546 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-log-ovn\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.584617 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-additional-scripts\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.584714 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-scripts\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.584798 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.584925 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqt9\" (UniqueName: \"kubernetes.io/projected/7edcf5bf-7b7d-488c-a280-f6c17b57125a-kube-api-access-kxqt9\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " 
pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.686377 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqt9\" (UniqueName: \"kubernetes.io/projected/7edcf5bf-7b7d-488c-a280-f6c17b57125a-kube-api-access-kxqt9\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.686486 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run-ovn\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.686513 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-log-ovn\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.686537 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-additional-scripts\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.686569 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-scripts\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.686606 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.687008 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.687103 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-log-ovn\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.687181 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run-ovn\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " 
pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.687961 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-additional-scripts\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.689469 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-scripts\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.729943 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqt9\" (UniqueName: \"kubernetes.io/projected/7edcf5bf-7b7d-488c-a280-f6c17b57125a-kube-api-access-kxqt9\") pod \"ovn-controller-nqz7q-config-t6fk4\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:53 crc kubenswrapper[4959]: I0121 13:28:53.816008 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.562084 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqz7q-config-t6fk4"] Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.683052 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-79tqr"] Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.685411 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-79tqr" Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.690447 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.695239 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-79tqr"] Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.837769 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hltds\" (UniqueName: \"kubernetes.io/projected/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-kube-api-access-hltds\") pod \"root-account-create-update-79tqr\" (UID: \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\") " pod="openstack/root-account-create-update-79tqr" Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.838830 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-operator-scripts\") pod \"root-account-create-update-79tqr\" (UID: \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\") " pod="openstack/root-account-create-update-79tqr" Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.973041 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hltds\" (UniqueName: \"kubernetes.io/projected/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-kube-api-access-hltds\") pod \"root-account-create-update-79tqr\" (UID: \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\") " pod="openstack/root-account-create-update-79tqr" Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.973104 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-operator-scripts\") pod \"root-account-create-update-79tqr\" (UID: \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\") " pod="openstack/root-account-create-update-79tqr" Jan 21 13:28:54 crc kubenswrapper[4959]: I0121 13:28:54.975259 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-operator-scripts\") pod \"root-account-create-update-79tqr\" (UID: \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\") " pod="openstack/root-account-create-update-79tqr" Jan 21 13:28:55 crc kubenswrapper[4959]: I0121 13:28:55.008288 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hltds\" (UniqueName: \"kubernetes.io/projected/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-kube-api-access-hltds\") pod \"root-account-create-update-79tqr\" (UID: \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\") " pod="openstack/root-account-create-update-79tqr" Jan 21 13:28:55 crc kubenswrapper[4959]: I0121 13:28:55.011548 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-79tqr" Jan 21 13:28:55 crc kubenswrapper[4959]: I0121 13:28:55.252330 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqz7q-config-t6fk4" event={"ID":"7edcf5bf-7b7d-488c-a280-f6c17b57125a","Type":"ContainerStarted","Data":"28bc2414eaa1ea9e92c6e578ff58bff9e3c57340b2464bcaa4f57ce716f78ca8"} Jan 21 13:28:56 crc kubenswrapper[4959]: I0121 13:28:56.072435 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-79tqr"] Jan 21 13:28:56 crc kubenswrapper[4959]: W0121 13:28:56.117212 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8078b737_6203_4e62_92aa_3eb8b3cfa4ed.slice/crio-2c9a6f59d8e403742c14a3550289a3affe126354a4c9d5277e1e461686d580f3 WatchSource:0}: Error finding container 2c9a6f59d8e403742c14a3550289a3affe126354a4c9d5277e1e461686d580f3: Status 404 returned error can't find the container with id 2c9a6f59d8e403742c14a3550289a3affe126354a4c9d5277e1e461686d580f3 Jan 21 13:28:56 crc kubenswrapper[4959]: I0121 13:28:56.268074 4959 generic.go:334] "Generic (PLEG): container finished" podID="7edcf5bf-7b7d-488c-a280-f6c17b57125a" containerID="6997fc58606eaf1282588ccb98c7f9e513bb3a0183edc9aae6c8bc0175080894" exitCode=0 Jan 21 13:28:56 crc kubenswrapper[4959]: I0121 13:28:56.268176 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqz7q-config-t6fk4" event={"ID":"7edcf5bf-7b7d-488c-a280-f6c17b57125a","Type":"ContainerDied","Data":"6997fc58606eaf1282588ccb98c7f9e513bb3a0183edc9aae6c8bc0175080894"} Jan 21 13:28:56 crc kubenswrapper[4959]: I0121 13:28:56.269676 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79tqr" event={"ID":"8078b737-6203-4e62-92aa-3eb8b3cfa4ed","Type":"ContainerStarted","Data":"2c9a6f59d8e403742c14a3550289a3affe126354a4c9d5277e1e461686d580f3"} Jan 21 13:28:57 crc kubenswrapper[4959]: I0121 13:28:57.281551 4959 generic.go:334] "Generic (PLEG): container finished" podID="8078b737-6203-4e62-92aa-3eb8b3cfa4ed" containerID="24d637ec626d553e179e45ff176c34dd6f687948fe9ebdfe66fb3470c8e24d8f" exitCode=0 Jan 21 13:28:57 crc kubenswrapper[4959]: I0121 13:28:57.281623 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79tqr" event={"ID":"8078b737-6203-4e62-92aa-3eb8b3cfa4ed","Type":"ContainerDied","Data":"24d637ec626d553e179e45ff176c34dd6f687948fe9ebdfe66fb3470c8e24d8f"} Jan 21 13:28:57 crc kubenswrapper[4959]: I0121 13:28:57.946486 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nqz7q" Jan 21 13:29:08 crc kubenswrapper[4959]: I0121 13:29:08.416478 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:29:08 crc kubenswrapper[4959]: I0121 13:29:08.432315 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 13:29:10 crc kubenswrapper[4959]: E0121 13:29:10.521261 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 21 13:29:10 crc kubenswrapper[4959]: E0121 13:29:10.521659 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnr5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-8244c_openstack(3fbabc2b-28b7-4e5e-b93c-96b9e3060c76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:29:10 crc kubenswrapper[4959]: E0121 13:29:10.522810 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-8244c" podUID="3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.575527 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79tqr" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.581776 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.594946 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-z9994"] Jan 21 13:29:10 crc kubenswrapper[4959]: E0121 13:29:10.595431 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8078b737-6203-4e62-92aa-3eb8b3cfa4ed" containerName="mariadb-account-create-update" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.595449 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8078b737-6203-4e62-92aa-3eb8b3cfa4ed" containerName="mariadb-account-create-update" Jan 21 13:29:10 crc kubenswrapper[4959]: E0121 13:29:10.595479 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edcf5bf-7b7d-488c-a280-f6c17b57125a" containerName="ovn-config" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.595485 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edcf5bf-7b7d-488c-a280-f6c17b57125a" containerName="ovn-config" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.595635 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7edcf5bf-7b7d-488c-a280-f6c17b57125a" containerName="ovn-config" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.595647 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="8078b737-6203-4e62-92aa-3eb8b3cfa4ed" containerName="mariadb-account-create-update" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.596517 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z9994" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.615872 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z9994"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.637768 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79tqr" event={"ID":"8078b737-6203-4e62-92aa-3eb8b3cfa4ed","Type":"ContainerDied","Data":"2c9a6f59d8e403742c14a3550289a3affe126354a4c9d5277e1e461686d580f3"} Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.637804 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c9a6f59d8e403742c14a3550289a3affe126354a4c9d5277e1e461686d580f3" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.637804 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79tqr" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.653882 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqz7q-config-t6fk4" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.654065 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqz7q-config-t6fk4" event={"ID":"7edcf5bf-7b7d-488c-a280-f6c17b57125a","Type":"ContainerDied","Data":"28bc2414eaa1ea9e92c6e578ff58bff9e3c57340b2464bcaa4f57ce716f78ca8"} Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.654105 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28bc2414eaa1ea9e92c6e578ff58bff9e3c57340b2464bcaa4f57ce716f78ca8" Jan 21 13:29:10 crc kubenswrapper[4959]: E0121 13:29:10.654445 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-8244c" podUID="3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.696216 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-td4zg"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.697329 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.716031 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4ddb-account-create-update-jnfns"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.717302 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.723901 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-additional-scripts\") pod \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.723963 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-operator-scripts\") pod \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\" (UID: \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\") " Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.723997 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hltds\" (UniqueName: \"kubernetes.io/projected/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-kube-api-access-hltds\") pod \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\" (UID: \"8078b737-6203-4e62-92aa-3eb8b3cfa4ed\") " Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.724028 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run\") pod \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.724072 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxqt9\" (UniqueName: \"kubernetes.io/projected/7edcf5bf-7b7d-488c-a280-f6c17b57125a-kube-api-access-kxqt9\") pod \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 
13:29:10.724126 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-scripts\") pod \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.724179 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run-ovn\") pod \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.724204 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-log-ovn\") pod \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\" (UID: \"7edcf5bf-7b7d-488c-a280-f6c17b57125a\") " Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.724553 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqjh\" (UniqueName: \"kubernetes.io/projected/626ad741-9147-4903-a245-1728d168ded5-kube-api-access-cmqjh\") pod \"barbican-db-create-z9994\" (UID: \"626ad741-9147-4903-a245-1728d168ded5\") " pod="openstack/barbican-db-create-z9994" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.724705 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/626ad741-9147-4903-a245-1728d168ded5-operator-scripts\") pod \"barbican-db-create-z9994\" (UID: \"626ad741-9147-4903-a245-1728d168ded5\") " pod="openstack/barbican-db-create-z9994" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.724026 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.725508 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run" (OuterVolumeSpecName: "var-run") pod "7edcf5bf-7b7d-488c-a280-f6c17b57125a" (UID: "7edcf5bf-7b7d-488c-a280-f6c17b57125a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.725339 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7edcf5bf-7b7d-488c-a280-f6c17b57125a" (UID: "7edcf5bf-7b7d-488c-a280-f6c17b57125a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.725804 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7edcf5bf-7b7d-488c-a280-f6c17b57125a" (UID: "7edcf5bf-7b7d-488c-a280-f6c17b57125a"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.725843 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7edcf5bf-7b7d-488c-a280-f6c17b57125a" (UID: "7edcf5bf-7b7d-488c-a280-f6c17b57125a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.726052 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-scripts" (OuterVolumeSpecName: "scripts") pod "7edcf5bf-7b7d-488c-a280-f6c17b57125a" (UID: "7edcf5bf-7b7d-488c-a280-f6c17b57125a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.726140 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8078b737-6203-4e62-92aa-3eb8b3cfa4ed" (UID: "8078b737-6203-4e62-92aa-3eb8b3cfa4ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.744509 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-td4zg"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.744580 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7edcf5bf-7b7d-488c-a280-f6c17b57125a-kube-api-access-kxqt9" (OuterVolumeSpecName: "kube-api-access-kxqt9") pod "7edcf5bf-7b7d-488c-a280-f6c17b57125a" (UID: "7edcf5bf-7b7d-488c-a280-f6c17b57125a"). InnerVolumeSpecName "kube-api-access-kxqt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.744752 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-kube-api-access-hltds" (OuterVolumeSpecName: "kube-api-access-hltds") pod "8078b737-6203-4e62-92aa-3eb8b3cfa4ed" (UID: "8078b737-6203-4e62-92aa-3eb8b3cfa4ed"). InnerVolumeSpecName "kube-api-access-hltds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.767438 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4ddb-account-create-update-jnfns"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.825821 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpjq9\" (UniqueName: \"kubernetes.io/projected/dfba12bc-c2c4-46de-9e89-9887d635f4fb-kube-api-access-fpjq9\") pod \"barbican-4ddb-account-create-update-jnfns\" (UID: \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\") " pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.825869 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/626ad741-9147-4903-a245-1728d168ded5-operator-scripts\") pod \"barbican-db-create-z9994\" (UID: \"626ad741-9147-4903-a245-1728d168ded5\") " pod="openstack/barbican-db-create-z9994" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.825915 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fbc83ca-99db-446e-972f-3cd8831575be-operator-scripts\") pod \"cinder-db-create-td4zg\" (UID: \"7fbc83ca-99db-446e-972f-3cd8831575be\") " pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.825953 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j969c\" (UniqueName: \"kubernetes.io/projected/7fbc83ca-99db-446e-972f-3cd8831575be-kube-api-access-j969c\") pod \"cinder-db-create-td4zg\" (UID: \"7fbc83ca-99db-446e-972f-3cd8831575be\") " pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826020 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmqjh\" (UniqueName: \"kubernetes.io/projected/626ad741-9147-4903-a245-1728d168ded5-kube-api-access-cmqjh\") pod \"barbican-db-create-z9994\" (UID: \"626ad741-9147-4903-a245-1728d168ded5\") " pod="openstack/barbican-db-create-z9994" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826036 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfba12bc-c2c4-46de-9e89-9887d635f4fb-operator-scripts\") pod \"barbican-4ddb-account-create-update-jnfns\" (UID: \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\") " pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826075 4959 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826086 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxqt9\" (UniqueName: \"kubernetes.io/projected/7edcf5bf-7b7d-488c-a280-f6c17b57125a-kube-api-access-kxqt9\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826114 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 
13:29:10.826123 4959 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826146 4959 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7edcf5bf-7b7d-488c-a280-f6c17b57125a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826157 4959 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf5bf-7b7d-488c-a280-f6c17b57125a-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826168 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.826179 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hltds\" (UniqueName: \"kubernetes.io/projected/8078b737-6203-4e62-92aa-3eb8b3cfa4ed-kube-api-access-hltds\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.827239 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/626ad741-9147-4903-a245-1728d168ded5-operator-scripts\") pod \"barbican-db-create-z9994\" (UID: \"626ad741-9147-4903-a245-1728d168ded5\") " pod="openstack/barbican-db-create-z9994" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.849029 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmqjh\" (UniqueName: \"kubernetes.io/projected/626ad741-9147-4903-a245-1728d168ded5-kube-api-access-cmqjh\") pod \"barbican-db-create-z9994\" (UID: \"626ad741-9147-4903-a245-1728d168ded5\") " pod="openstack/barbican-db-create-z9994" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.890119 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9e31-account-create-update-ddns8"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.891114 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.892773 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.897329 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rmpkw"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.898773 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.905088 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9e31-account-create-update-ddns8"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.927663 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rmpkw"] Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.930661 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfba12bc-c2c4-46de-9e89-9887d635f4fb-operator-scripts\") pod \"barbican-4ddb-account-create-update-jnfns\" (UID: \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\") " pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.930727 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpjq9\" (UniqueName: \"kubernetes.io/projected/dfba12bc-c2c4-46de-9e89-9887d635f4fb-kube-api-access-fpjq9\") pod \"barbican-4ddb-account-create-update-jnfns\" (UID: \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\") " pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.930779 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fbc83ca-99db-446e-972f-3cd8831575be-operator-scripts\") pod \"cinder-db-create-td4zg\" (UID: \"7fbc83ca-99db-446e-972f-3cd8831575be\") " pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.930817 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j969c\" (UniqueName: \"kubernetes.io/projected/7fbc83ca-99db-446e-972f-3cd8831575be-kube-api-access-j969c\") pod \"cinder-db-create-td4zg\" (UID: \"7fbc83ca-99db-446e-972f-3cd8831575be\") " pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.931759 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfba12bc-c2c4-46de-9e89-9887d635f4fb-operator-scripts\") pod \"barbican-4ddb-account-create-update-jnfns\" (UID: \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\") " pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.932430 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fbc83ca-99db-446e-972f-3cd8831575be-operator-scripts\") pod \"cinder-db-create-td4zg\" (UID: \"7fbc83ca-99db-446e-972f-3cd8831575be\") " pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.933305 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z9994" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.974699 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpjq9\" (UniqueName: \"kubernetes.io/projected/dfba12bc-c2c4-46de-9e89-9887d635f4fb-kube-api-access-fpjq9\") pod \"barbican-4ddb-account-create-update-jnfns\" (UID: \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\") " pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:10 crc kubenswrapper[4959]: I0121 13:29:10.988764 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j969c\" (UniqueName: \"kubernetes.io/projected/7fbc83ca-99db-446e-972f-3cd8831575be-kube-api-access-j969c\") pod \"cinder-db-create-td4zg\" (UID: \"7fbc83ca-99db-446e-972f-3cd8831575be\") " pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.028580 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-s5xcr"] Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.029464 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.029897 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.032418 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cg8\" (UniqueName: \"kubernetes.io/projected/e8f7b344-15e3-4604-bdf0-b40a33752eac-kube-api-access-f5cg8\") pod \"neutron-db-create-rmpkw\" (UID: \"e8f7b344-15e3-4604-bdf0-b40a33752eac\") " pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.032484 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qs9\" (UniqueName: \"kubernetes.io/projected/49716e38-daf2-4411-aa24-061680a0bcbd-kube-api-access-t7qs9\") pod \"neutron-9e31-account-create-update-ddns8\" (UID: \"49716e38-daf2-4411-aa24-061680a0bcbd\") " pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.032556 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49716e38-daf2-4411-aa24-061680a0bcbd-operator-scripts\") pod \"neutron-9e31-account-create-update-ddns8\" (UID: \"49716e38-daf2-4411-aa24-061680a0bcbd\") " pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.032573 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f7b344-15e3-4604-bdf0-b40a33752eac-operator-scripts\") pod \"neutron-db-create-rmpkw\" (UID: \"e8f7b344-15e3-4604-bdf0-b40a33752eac\") " pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.033443 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.033735 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.033799 4959 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone-keystone-dockercfg-9w7kg" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.033743 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.036225 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-s5xcr"] Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.061756 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5ae-account-create-update-gldcv"] Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.062841 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.065001 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.069648 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5ae-account-create-update-gldcv"] Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.083555 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.133743 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f2316c-ad55-4f35-9063-82708a99e69f-operator-scripts\") pod \"cinder-b5ae-account-create-update-gldcv\" (UID: \"47f2316c-ad55-4f35-9063-82708a99e69f\") " pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.133803 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-combined-ca-bundle\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.133833 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jtn7\" (UniqueName: \"kubernetes.io/projected/8abfd29e-86a3-448f-b722-c98d11933e6c-kube-api-access-4jtn7\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.133995 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49716e38-daf2-4411-aa24-061680a0bcbd-operator-scripts\") pod \"neutron-9e31-account-create-update-ddns8\" (UID: \"49716e38-daf2-4411-aa24-061680a0bcbd\") " pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.134042 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f7b344-15e3-4604-bdf0-b40a33752eac-operator-scripts\") pod \"neutron-db-create-rmpkw\" (UID: \"e8f7b344-15e3-4604-bdf0-b40a33752eac\") " pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.134160 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qf7r\" (UniqueName: 
\"kubernetes.io/projected/47f2316c-ad55-4f35-9063-82708a99e69f-kube-api-access-5qf7r\") pod \"cinder-b5ae-account-create-update-gldcv\" (UID: \"47f2316c-ad55-4f35-9063-82708a99e69f\") " pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.134239 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5cg8\" (UniqueName: \"kubernetes.io/projected/e8f7b344-15e3-4604-bdf0-b40a33752eac-kube-api-access-f5cg8\") pod \"neutron-db-create-rmpkw\" (UID: \"e8f7b344-15e3-4604-bdf0-b40a33752eac\") " pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.134337 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qs9\" (UniqueName: \"kubernetes.io/projected/49716e38-daf2-4411-aa24-061680a0bcbd-kube-api-access-t7qs9\") pod \"neutron-9e31-account-create-update-ddns8\" (UID: \"49716e38-daf2-4411-aa24-061680a0bcbd\") " pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.134412 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-config-data\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.134805 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49716e38-daf2-4411-aa24-061680a0bcbd-operator-scripts\") pod \"neutron-9e31-account-create-update-ddns8\" (UID: \"49716e38-daf2-4411-aa24-061680a0bcbd\") " pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.134832 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f7b344-15e3-4604-bdf0-b40a33752eac-operator-scripts\") pod \"neutron-db-create-rmpkw\" (UID: \"e8f7b344-15e3-4604-bdf0-b40a33752eac\") " pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.159722 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5cg8\" (UniqueName: \"kubernetes.io/projected/e8f7b344-15e3-4604-bdf0-b40a33752eac-kube-api-access-f5cg8\") pod \"neutron-db-create-rmpkw\" (UID: \"e8f7b344-15e3-4604-bdf0-b40a33752eac\") " pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.160108 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qs9\" (UniqueName: \"kubernetes.io/projected/49716e38-daf2-4411-aa24-061680a0bcbd-kube-api-access-t7qs9\") pod \"neutron-9e31-account-create-update-ddns8\" (UID: \"49716e38-daf2-4411-aa24-061680a0bcbd\") " pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.212432 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.219662 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.235589 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-config-data\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.235664 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f2316c-ad55-4f35-9063-82708a99e69f-operator-scripts\") pod \"cinder-b5ae-account-create-update-gldcv\" (UID: \"47f2316c-ad55-4f35-9063-82708a99e69f\") " pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.235690 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-combined-ca-bundle\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.235707 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jtn7\" (UniqueName: \"kubernetes.io/projected/8abfd29e-86a3-448f-b722-c98d11933e6c-kube-api-access-4jtn7\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.235759 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qf7r\" (UniqueName: \"kubernetes.io/projected/47f2316c-ad55-4f35-9063-82708a99e69f-kube-api-access-5qf7r\") pod \"cinder-b5ae-account-create-update-gldcv\" (UID: \"47f2316c-ad55-4f35-9063-82708a99e69f\") " pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.240834 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-config-data\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.241553 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f2316c-ad55-4f35-9063-82708a99e69f-operator-scripts\") pod \"cinder-b5ae-account-create-update-gldcv\" (UID: \"47f2316c-ad55-4f35-9063-82708a99e69f\") " pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.244302 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-combined-ca-bundle\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.312516 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jtn7\" (UniqueName: \"kubernetes.io/projected/8abfd29e-86a3-448f-b722-c98d11933e6c-kube-api-access-4jtn7\") pod \"keystone-db-sync-s5xcr\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " 
pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.315909 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qf7r\" (UniqueName: \"kubernetes.io/projected/47f2316c-ad55-4f35-9063-82708a99e69f-kube-api-access-5qf7r\") pod \"cinder-b5ae-account-create-update-gldcv\" (UID: \"47f2316c-ad55-4f35-9063-82708a99e69f\") " pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.357115 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.377084 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.717740 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z9994"] Jan 21 13:29:11 crc kubenswrapper[4959]: W0121 13:29:11.728980 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod626ad741_9147_4903_a245_1728d168ded5.slice/crio-ec0e9c5e1e95a9768f275f992297571fa1c2597d7a16c7ea93f19799c678bbdc WatchSource:0}: Error finding container ec0e9c5e1e95a9768f275f992297571fa1c2597d7a16c7ea93f19799c678bbdc: Status 404 returned error can't find the container with id ec0e9c5e1e95a9768f275f992297571fa1c2597d7a16c7ea93f19799c678bbdc Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.777245 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nqz7q-config-t6fk4"] Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.799160 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nqz7q-config-t6fk4"] Jan 21 13:29:11 crc kubenswrapper[4959]: I0121 13:29:11.823466 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4ddb-account-create-update-jnfns"] Jan 21 13:29:11 crc kubenswrapper[4959]: W0121 13:29:11.850549 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfba12bc_c2c4_46de_9e89_9887d635f4fb.slice/crio-4b67d438fcd3c3f81ae68e71a9d123a4ca202941c15d541dc57f1f038874dd6b WatchSource:0}: Error finding container 4b67d438fcd3c3f81ae68e71a9d123a4ca202941c15d541dc57f1f038874dd6b: Status 404 returned error can't find the container with id 4b67d438fcd3c3f81ae68e71a9d123a4ca202941c15d541dc57f1f038874dd6b Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.172610 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-td4zg"] Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.192269 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9e31-account-create-update-ddns8"] Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.347338 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-s5xcr"] Jan 21 13:29:12 crc kubenswrapper[4959]: W0121 13:29:12.350772 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47f2316c_ad55_4f35_9063_82708a99e69f.slice/crio-cd44a9e3d36f0025ac8c64f14fa3af91a1ed2bc2b67ccd02640e74fb099f9657 WatchSource:0}: Error finding container cd44a9e3d36f0025ac8c64f14fa3af91a1ed2bc2b67ccd02640e74fb099f9657: Status 404 returned error can't find the container with id 
cd44a9e3d36f0025ac8c64f14fa3af91a1ed2bc2b67ccd02640e74fb099f9657 Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.360498 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5ae-account-create-update-gldcv"] Jan 21 13:29:12 crc kubenswrapper[4959]: W0121 13:29:12.385393 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8abfd29e_86a3_448f_b722_c98d11933e6c.slice/crio-9325a6945dbdfde63dbea04ed12753cef52b47c43749fac5205b1e2cb92e1efe WatchSource:0}: Error finding container 9325a6945dbdfde63dbea04ed12753cef52b47c43749fac5205b1e2cb92e1efe: Status 404 returned error can't find the container with id 9325a6945dbdfde63dbea04ed12753cef52b47c43749fac5205b1e2cb92e1efe Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.506475 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rmpkw"] Jan 21 13:29:12 crc kubenswrapper[4959]: W0121 13:29:12.508541 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8f7b344_15e3_4604_bdf0_b40a33752eac.slice/crio-29919d043ae670f331ee88a8239b02622460ac2c38b62d15f9e432c3f907a638 WatchSource:0}: Error finding container 29919d043ae670f331ee88a8239b02622460ac2c38b62d15f9e432c3f907a638: Status 404 returned error can't find the container with id 29919d043ae670f331ee88a8239b02622460ac2c38b62d15f9e432c3f907a638 Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.710286 4959 generic.go:334] "Generic (PLEG): container finished" podID="dfba12bc-c2c4-46de-9e89-9887d635f4fb" containerID="9e776dbdbc5d96464ac5a830c7d94ada4877595d5a305d4e2044a837a1b45cab" exitCode=0 Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.710368 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4ddb-account-create-update-jnfns" event={"ID":"dfba12bc-c2c4-46de-9e89-9887d635f4fb","Type":"ContainerDied","Data":"9e776dbdbc5d96464ac5a830c7d94ada4877595d5a305d4e2044a837a1b45cab"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.710396 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4ddb-account-create-update-jnfns" event={"ID":"dfba12bc-c2c4-46de-9e89-9887d635f4fb","Type":"ContainerStarted","Data":"4b67d438fcd3c3f81ae68e71a9d123a4ca202941c15d541dc57f1f038874dd6b"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.711895 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rmpkw" event={"ID":"e8f7b344-15e3-4604-bdf0-b40a33752eac","Type":"ContainerStarted","Data":"29919d043ae670f331ee88a8239b02622460ac2c38b62d15f9e432c3f907a638"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.713822 4959 generic.go:334] "Generic (PLEG): container finished" podID="7fbc83ca-99db-446e-972f-3cd8831575be" containerID="f356c1d615ff799464c9a8b5ae16d0e9b212376c5bad972a1d1f91d336383419" exitCode=0 Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.713908 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-td4zg" event={"ID":"7fbc83ca-99db-446e-972f-3cd8831575be","Type":"ContainerDied","Data":"f356c1d615ff799464c9a8b5ae16d0e9b212376c5bad972a1d1f91d336383419"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.713938 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-td4zg" 
event={"ID":"7fbc83ca-99db-446e-972f-3cd8831575be","Type":"ContainerStarted","Data":"ebf08b822e8f5c6d4d08a6695b5f4a1611b0ef6b522eae0a3eb473e040227f05"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.716057 4959 generic.go:334] "Generic (PLEG): container finished" podID="626ad741-9147-4903-a245-1728d168ded5" containerID="3e14f0f27ed1d9bdf769bd92a53fee7e33860ce25510342fa66257c45f6cc066" exitCode=0 Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.716127 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z9994" event={"ID":"626ad741-9147-4903-a245-1728d168ded5","Type":"ContainerDied","Data":"3e14f0f27ed1d9bdf769bd92a53fee7e33860ce25510342fa66257c45f6cc066"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.716241 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z9994" event={"ID":"626ad741-9147-4903-a245-1728d168ded5","Type":"ContainerStarted","Data":"ec0e9c5e1e95a9768f275f992297571fa1c2597d7a16c7ea93f19799c678bbdc"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.717443 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5ae-account-create-update-gldcv" event={"ID":"47f2316c-ad55-4f35-9063-82708a99e69f","Type":"ContainerStarted","Data":"cd44a9e3d36f0025ac8c64f14fa3af91a1ed2bc2b67ccd02640e74fb099f9657"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.719052 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s5xcr" event={"ID":"8abfd29e-86a3-448f-b722-c98d11933e6c","Type":"ContainerStarted","Data":"9325a6945dbdfde63dbea04ed12753cef52b47c43749fac5205b1e2cb92e1efe"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.720777 4959 generic.go:334] "Generic (PLEG): container finished" podID="49716e38-daf2-4411-aa24-061680a0bcbd" containerID="73390ed88a59e75822991db5f56eff4eff2cf86910e860cd9b3ac570647f5c13" exitCode=0 Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.720815 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9e31-account-create-update-ddns8" event={"ID":"49716e38-daf2-4411-aa24-061680a0bcbd","Type":"ContainerDied","Data":"73390ed88a59e75822991db5f56eff4eff2cf86910e860cd9b3ac570647f5c13"} Jan 21 13:29:12 crc kubenswrapper[4959]: I0121 13:29:12.720838 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9e31-account-create-update-ddns8" event={"ID":"49716e38-daf2-4411-aa24-061680a0bcbd","Type":"ContainerStarted","Data":"6e9a160845cc481e94b527e7a627a82112bc0f1b8a5d110dc195aeb9487414fb"} Jan 21 13:29:13 crc kubenswrapper[4959]: I0121 13:29:13.303307 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7edcf5bf-7b7d-488c-a280-f6c17b57125a" path="/var/lib/kubelet/pods/7edcf5bf-7b7d-488c-a280-f6c17b57125a/volumes" Jan 21 13:29:13 crc kubenswrapper[4959]: I0121 13:29:13.734487 4959 generic.go:334] "Generic (PLEG): container finished" podID="47f2316c-ad55-4f35-9063-82708a99e69f" containerID="0e8c3794ac3e0611513722d6ab8fb02d862d0237755edc59fd8ab8776e9f4712" exitCode=0 Jan 21 13:29:13 crc kubenswrapper[4959]: I0121 13:29:13.734553 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5ae-account-create-update-gldcv" event={"ID":"47f2316c-ad55-4f35-9063-82708a99e69f","Type":"ContainerDied","Data":"0e8c3794ac3e0611513722d6ab8fb02d862d0237755edc59fd8ab8776e9f4712"} Jan 21 13:29:13 crc kubenswrapper[4959]: I0121 13:29:13.740334 4959 generic.go:334] "Generic (PLEG): container finished" 
podID="e8f7b344-15e3-4604-bdf0-b40a33752eac" containerID="9dbd450ffa0594e436a80325bb5d9548dcea49385caf37032bac70e54dc74564" exitCode=0 Jan 21 13:29:13 crc kubenswrapper[4959]: I0121 13:29:13.740468 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rmpkw" event={"ID":"e8f7b344-15e3-4604-bdf0-b40a33752eac","Type":"ContainerDied","Data":"9dbd450ffa0594e436a80325bb5d9548dcea49385caf37032bac70e54dc74564"} Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.280954 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.421458 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z9994" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.426742 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpjq9\" (UniqueName: \"kubernetes.io/projected/dfba12bc-c2c4-46de-9e89-9887d635f4fb-kube-api-access-fpjq9\") pod \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\" (UID: \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\") " Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.426897 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfba12bc-c2c4-46de-9e89-9887d635f4fb-operator-scripts\") pod \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\" (UID: \"dfba12bc-c2c4-46de-9e89-9887d635f4fb\") " Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.427646 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.428062 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfba12bc-c2c4-46de-9e89-9887d635f4fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfba12bc-c2c4-46de-9e89-9887d635f4fb" (UID: "dfba12bc-c2c4-46de-9e89-9887d635f4fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.428705 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfba12bc-c2c4-46de-9e89-9887d635f4fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.437811 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfba12bc-c2c4-46de-9e89-9887d635f4fb-kube-api-access-fpjq9" (OuterVolumeSpecName: "kube-api-access-fpjq9") pod "dfba12bc-c2c4-46de-9e89-9887d635f4fb" (UID: "dfba12bc-c2c4-46de-9e89-9887d635f4fb"). InnerVolumeSpecName "kube-api-access-fpjq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.445770 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.529376 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmqjh\" (UniqueName: \"kubernetes.io/projected/626ad741-9147-4903-a245-1728d168ded5-kube-api-access-cmqjh\") pod \"626ad741-9147-4903-a245-1728d168ded5\" (UID: \"626ad741-9147-4903-a245-1728d168ded5\") " Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.529517 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/626ad741-9147-4903-a245-1728d168ded5-operator-scripts\") pod \"626ad741-9147-4903-a245-1728d168ded5\" (UID: \"626ad741-9147-4903-a245-1728d168ded5\") " Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.529650 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49716e38-daf2-4411-aa24-061680a0bcbd-operator-scripts\") pod \"49716e38-daf2-4411-aa24-061680a0bcbd\" (UID: \"49716e38-daf2-4411-aa24-061680a0bcbd\") " Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.529694 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qs9\" (UniqueName: \"kubernetes.io/projected/49716e38-daf2-4411-aa24-061680a0bcbd-kube-api-access-t7qs9\") pod \"49716e38-daf2-4411-aa24-061680a0bcbd\" (UID: \"49716e38-daf2-4411-aa24-061680a0bcbd\") " Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.530188 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpjq9\" (UniqueName: \"kubernetes.io/projected/dfba12bc-c2c4-46de-9e89-9887d635f4fb-kube-api-access-fpjq9\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.530223 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49716e38-daf2-4411-aa24-061680a0bcbd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49716e38-daf2-4411-aa24-061680a0bcbd" (UID: "49716e38-daf2-4411-aa24-061680a0bcbd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.530866 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626ad741-9147-4903-a245-1728d168ded5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "626ad741-9147-4903-a245-1728d168ded5" (UID: "626ad741-9147-4903-a245-1728d168ded5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.534303 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626ad741-9147-4903-a245-1728d168ded5-kube-api-access-cmqjh" (OuterVolumeSpecName: "kube-api-access-cmqjh") pod "626ad741-9147-4903-a245-1728d168ded5" (UID: "626ad741-9147-4903-a245-1728d168ded5"). InnerVolumeSpecName "kube-api-access-cmqjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.535840 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49716e38-daf2-4411-aa24-061680a0bcbd-kube-api-access-t7qs9" (OuterVolumeSpecName: "kube-api-access-t7qs9") pod "49716e38-daf2-4411-aa24-061680a0bcbd" (UID: "49716e38-daf2-4411-aa24-061680a0bcbd"). 
InnerVolumeSpecName "kube-api-access-t7qs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.631073 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fbc83ca-99db-446e-972f-3cd8831575be-operator-scripts\") pod \"7fbc83ca-99db-446e-972f-3cd8831575be\" (UID: \"7fbc83ca-99db-446e-972f-3cd8831575be\") " Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.631315 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j969c\" (UniqueName: \"kubernetes.io/projected/7fbc83ca-99db-446e-972f-3cd8831575be-kube-api-access-j969c\") pod \"7fbc83ca-99db-446e-972f-3cd8831575be\" (UID: \"7fbc83ca-99db-446e-972f-3cd8831575be\") " Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.631577 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fbc83ca-99db-446e-972f-3cd8831575be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fbc83ca-99db-446e-972f-3cd8831575be" (UID: "7fbc83ca-99db-446e-972f-3cd8831575be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.631760 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/626ad741-9147-4903-a245-1728d168ded5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.631776 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49716e38-daf2-4411-aa24-061680a0bcbd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.631788 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7qs9\" (UniqueName: \"kubernetes.io/projected/49716e38-daf2-4411-aa24-061680a0bcbd-kube-api-access-t7qs9\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.631801 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmqjh\" (UniqueName: \"kubernetes.io/projected/626ad741-9147-4903-a245-1728d168ded5-kube-api-access-cmqjh\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.631812 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fbc83ca-99db-446e-972f-3cd8831575be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.634976 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbc83ca-99db-446e-972f-3cd8831575be-kube-api-access-j969c" (OuterVolumeSpecName: "kube-api-access-j969c") pod "7fbc83ca-99db-446e-972f-3cd8831575be" (UID: "7fbc83ca-99db-446e-972f-3cd8831575be"). InnerVolumeSpecName "kube-api-access-j969c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.733956 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j969c\" (UniqueName: \"kubernetes.io/projected/7fbc83ca-99db-446e-972f-3cd8831575be-kube-api-access-j969c\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.750706 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z9994" event={"ID":"626ad741-9147-4903-a245-1728d168ded5","Type":"ContainerDied","Data":"ec0e9c5e1e95a9768f275f992297571fa1c2597d7a16c7ea93f19799c678bbdc"} Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.750742 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z9994" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.750748 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec0e9c5e1e95a9768f275f992297571fa1c2597d7a16c7ea93f19799c678bbdc" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.752865 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9e31-account-create-update-ddns8" event={"ID":"49716e38-daf2-4411-aa24-061680a0bcbd","Type":"ContainerDied","Data":"6e9a160845cc481e94b527e7a627a82112bc0f1b8a5d110dc195aeb9487414fb"} Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.752883 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9e31-account-create-update-ddns8" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.752891 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9a160845cc481e94b527e7a627a82112bc0f1b8a5d110dc195aeb9487414fb" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.754865 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4ddb-account-create-update-jnfns" event={"ID":"dfba12bc-c2c4-46de-9e89-9887d635f4fb","Type":"ContainerDied","Data":"4b67d438fcd3c3f81ae68e71a9d123a4ca202941c15d541dc57f1f038874dd6b"} Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.754888 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4ddb-account-create-update-jnfns" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.754894 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b67d438fcd3c3f81ae68e71a9d123a4ca202941c15d541dc57f1f038874dd6b" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.756751 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-td4zg" event={"ID":"7fbc83ca-99db-446e-972f-3cd8831575be","Type":"ContainerDied","Data":"ebf08b822e8f5c6d4d08a6695b5f4a1611b0ef6b522eae0a3eb473e040227f05"} Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.756794 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf08b822e8f5c6d4d08a6695b5f4a1611b0ef6b522eae0a3eb473e040227f05" Jan 21 13:29:14 crc kubenswrapper[4959]: I0121 13:29:14.756871 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-td4zg" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.042154 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.086762 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.140725 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5cg8\" (UniqueName: \"kubernetes.io/projected/e8f7b344-15e3-4604-bdf0-b40a33752eac-kube-api-access-f5cg8\") pod \"e8f7b344-15e3-4604-bdf0-b40a33752eac\" (UID: \"e8f7b344-15e3-4604-bdf0-b40a33752eac\") " Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.140852 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f7b344-15e3-4604-bdf0-b40a33752eac-operator-scripts\") pod \"e8f7b344-15e3-4604-bdf0-b40a33752eac\" (UID: \"e8f7b344-15e3-4604-bdf0-b40a33752eac\") " Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.142009 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f7b344-15e3-4604-bdf0-b40a33752eac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8f7b344-15e3-4604-bdf0-b40a33752eac" (UID: "e8f7b344-15e3-4604-bdf0-b40a33752eac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.142476 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8f7b344-15e3-4604-bdf0-b40a33752eac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.147996 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f7b344-15e3-4604-bdf0-b40a33752eac-kube-api-access-f5cg8" (OuterVolumeSpecName: "kube-api-access-f5cg8") pod "e8f7b344-15e3-4604-bdf0-b40a33752eac" (UID: "e8f7b344-15e3-4604-bdf0-b40a33752eac"). InnerVolumeSpecName "kube-api-access-f5cg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.243901 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qf7r\" (UniqueName: \"kubernetes.io/projected/47f2316c-ad55-4f35-9063-82708a99e69f-kube-api-access-5qf7r\") pod \"47f2316c-ad55-4f35-9063-82708a99e69f\" (UID: \"47f2316c-ad55-4f35-9063-82708a99e69f\") " Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.244241 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f2316c-ad55-4f35-9063-82708a99e69f-operator-scripts\") pod \"47f2316c-ad55-4f35-9063-82708a99e69f\" (UID: \"47f2316c-ad55-4f35-9063-82708a99e69f\") " Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.244743 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f2316c-ad55-4f35-9063-82708a99e69f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47f2316c-ad55-4f35-9063-82708a99e69f" (UID: "47f2316c-ad55-4f35-9063-82708a99e69f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.245220 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f2316c-ad55-4f35-9063-82708a99e69f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.245238 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5cg8\" (UniqueName: \"kubernetes.io/projected/e8f7b344-15e3-4604-bdf0-b40a33752eac-kube-api-access-f5cg8\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.247024 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f2316c-ad55-4f35-9063-82708a99e69f-kube-api-access-5qf7r" (OuterVolumeSpecName: "kube-api-access-5qf7r") pod "47f2316c-ad55-4f35-9063-82708a99e69f" (UID: "47f2316c-ad55-4f35-9063-82708a99e69f"). InnerVolumeSpecName "kube-api-access-5qf7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.347276 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qf7r\" (UniqueName: \"kubernetes.io/projected/47f2316c-ad55-4f35-9063-82708a99e69f-kube-api-access-5qf7r\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.773682 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rmpkw" event={"ID":"e8f7b344-15e3-4604-bdf0-b40a33752eac","Type":"ContainerDied","Data":"29919d043ae670f331ee88a8239b02622460ac2c38b62d15f9e432c3f907a638"} Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.773723 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29919d043ae670f331ee88a8239b02622460ac2c38b62d15f9e432c3f907a638" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.773785 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rmpkw" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.776063 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5ae-account-create-update-gldcv" event={"ID":"47f2316c-ad55-4f35-9063-82708a99e69f","Type":"ContainerDied","Data":"cd44a9e3d36f0025ac8c64f14fa3af91a1ed2bc2b67ccd02640e74fb099f9657"} Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.776311 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd44a9e3d36f0025ac8c64f14fa3af91a1ed2bc2b67ccd02640e74fb099f9657" Jan 21 13:29:15 crc kubenswrapper[4959]: I0121 13:29:15.776357 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5ae-account-create-update-gldcv" Jan 21 13:29:24 crc kubenswrapper[4959]: E0121 13:29:24.225271 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Jan 21 13:29:24 crc kubenswrapper[4959]: E0121 13:29:24.225983 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jtn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-s5xcr_openstack(8abfd29e-86a3-448f-b722-c98d11933e6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:29:24 crc kubenswrapper[4959]: E0121 13:29:24.227194 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-s5xcr" podUID="8abfd29e-86a3-448f-b722-c98d11933e6c" Jan 21 13:29:24 crc kubenswrapper[4959]: I0121 13:29:24.850270 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8244c" event={"ID":"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76","Type":"ContainerStarted","Data":"ad697d6dcefd2ad9205e228d8a5b0d5769bcdad098502f232d1e457733622bc4"} Jan 21 13:29:24 crc kubenswrapper[4959]: E0121 13:29:24.851159 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-s5xcr" podUID="8abfd29e-86a3-448f-b722-c98d11933e6c" Jan 21 13:29:24 crc kubenswrapper[4959]: I0121 
13:29:24.887989 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8244c" podStartSLOduration=1.721779287 podStartE2EDuration="38.887967791s" podCreationTimestamp="2026-01-21 13:28:46 +0000 UTC" firstStartedPulling="2026-01-21 13:28:47.095743498 +0000 UTC m=+1188.058774041" lastFinishedPulling="2026-01-21 13:29:24.261932002 +0000 UTC m=+1225.224962545" observedRunningTime="2026-01-21 13:29:24.881478576 +0000 UTC m=+1225.844509139" watchObservedRunningTime="2026-01-21 13:29:24.887967791 +0000 UTC m=+1225.850998334" Jan 21 13:29:30 crc kubenswrapper[4959]: I0121 13:29:30.918788 4959 generic.go:334] "Generic (PLEG): container finished" podID="3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" containerID="ad697d6dcefd2ad9205e228d8a5b0d5769bcdad098502f232d1e457733622bc4" exitCode=0 Jan 21 13:29:30 crc kubenswrapper[4959]: I0121 13:29:30.918896 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8244c" event={"ID":"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76","Type":"ContainerDied","Data":"ad697d6dcefd2ad9205e228d8a5b0d5769bcdad098502f232d1e457733622bc4"} Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.265646 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8244c" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.397562 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-db-sync-config-data\") pod \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.397641 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnr5x\" (UniqueName: \"kubernetes.io/projected/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-kube-api-access-wnr5x\") pod \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.397766 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-combined-ca-bundle\") pod \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.397818 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-config-data\") pod \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\" (UID: \"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76\") " Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.403458 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" (UID: "3fbabc2b-28b7-4e5e-b93c-96b9e3060c76"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.405345 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-kube-api-access-wnr5x" (OuterVolumeSpecName: "kube-api-access-wnr5x") pod "3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" (UID: "3fbabc2b-28b7-4e5e-b93c-96b9e3060c76"). InnerVolumeSpecName "kube-api-access-wnr5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.420353 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" (UID: "3fbabc2b-28b7-4e5e-b93c-96b9e3060c76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.439529 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-config-data" (OuterVolumeSpecName: "config-data") pod "3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" (UID: "3fbabc2b-28b7-4e5e-b93c-96b9e3060c76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.499973 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.500020 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.500032 4959 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.500044 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnr5x\" (UniqueName: \"kubernetes.io/projected/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76-kube-api-access-wnr5x\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.939120 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8244c" event={"ID":"3fbabc2b-28b7-4e5e-b93c-96b9e3060c76","Type":"ContainerDied","Data":"9fbb548898a65324ecf4d1d14da0b77416d81c8ad6a1ef5a89787a79d69a19d3"} Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.939426 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8244c" Jan 21 13:29:32 crc kubenswrapper[4959]: I0121 13:29:32.939449 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fbb548898a65324ecf4d1d14da0b77416d81c8ad6a1ef5a89787a79d69a19d3" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.547849 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-7r8cd"] Jan 21 13:29:33 crc kubenswrapper[4959]: E0121 13:29:33.548241 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f2316c-ad55-4f35-9063-82708a99e69f" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548261 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f2316c-ad55-4f35-9063-82708a99e69f" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: E0121 13:29:33.548280 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbc83ca-99db-446e-972f-3cd8831575be" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548289 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbc83ca-99db-446e-972f-3cd8831575be" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: E0121 13:29:33.548305 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfba12bc-c2c4-46de-9e89-9887d635f4fb" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548313 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfba12bc-c2c4-46de-9e89-9887d635f4fb" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: E0121 13:29:33.548320 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626ad741-9147-4903-a245-1728d168ded5" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548328 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="626ad741-9147-4903-a245-1728d168ded5" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: E0121 13:29:33.548336 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49716e38-daf2-4411-aa24-061680a0bcbd" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548343 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="49716e38-daf2-4411-aa24-061680a0bcbd" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: E0121 13:29:33.548365 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f7b344-15e3-4604-bdf0-b40a33752eac" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548375 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f7b344-15e3-4604-bdf0-b40a33752eac" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: E0121 13:29:33.548391 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" containerName="glance-db-sync" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548397 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" containerName="glance-db-sync" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548523 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" containerName="glance-db-sync" 
Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548534 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbc83ca-99db-446e-972f-3cd8831575be" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548543 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="49716e38-daf2-4411-aa24-061680a0bcbd" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548553 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="626ad741-9147-4903-a245-1728d168ded5" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548562 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f7b344-15e3-4604-bdf0-b40a33752eac" containerName="mariadb-database-create" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548572 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f2316c-ad55-4f35-9063-82708a99e69f" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.548580 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfba12bc-c2c4-46de-9e89-9887d635f4fb" containerName="mariadb-account-create-update" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.549337 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.602280 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-7r8cd"] Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.611485 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.611561 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/708871f1-a26e-4537-8fce-bbaddf86c2e6-kube-api-access-4lgp8\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.611657 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-config\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.611777 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.611902 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-dns-svc\") pod 
\"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.811382 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/708871f1-a26e-4537-8fce-bbaddf86c2e6-kube-api-access-4lgp8\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.811499 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-config\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.811579 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.811647 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-dns-svc\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.811700 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.812739 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.814218 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-dns-svc\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.814931 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-config\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.815553 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 
21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.834719 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/708871f1-a26e-4537-8fce-bbaddf86c2e6-kube-api-access-4lgp8\") pod \"dnsmasq-dns-554567b4f7-7r8cd\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:33 crc kubenswrapper[4959]: I0121 13:29:33.884599 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:34 crc kubenswrapper[4959]: I0121 13:29:34.327935 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-7r8cd"] Jan 21 13:29:34 crc kubenswrapper[4959]: I0121 13:29:34.954843 4959 generic.go:334] "Generic (PLEG): container finished" podID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerID="508d54e4f80970a7b48d000acc246ea93a4610d91a2a85eb444566b332c0b0cb" exitCode=0 Jan 21 13:29:34 crc kubenswrapper[4959]: I0121 13:29:34.954973 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" event={"ID":"708871f1-a26e-4537-8fce-bbaddf86c2e6","Type":"ContainerDied","Data":"508d54e4f80970a7b48d000acc246ea93a4610d91a2a85eb444566b332c0b0cb"} Jan 21 13:29:34 crc kubenswrapper[4959]: I0121 13:29:34.955159 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" event={"ID":"708871f1-a26e-4537-8fce-bbaddf86c2e6","Type":"ContainerStarted","Data":"2388337fb7cad27c567c6d16a6d850fd864984e1efda146b3544c84573f2a2be"} Jan 21 13:29:35 crc kubenswrapper[4959]: I0121 13:29:35.966780 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" event={"ID":"708871f1-a26e-4537-8fce-bbaddf86c2e6","Type":"ContainerStarted","Data":"eb9c9759a449d5e41a9a0bfc0f4efe6bebfb84ae0ed8874846aa6dbc9921f80e"} Jan 21 13:29:35 crc kubenswrapper[4959]: I0121 13:29:35.967332 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:35 crc kubenswrapper[4959]: I0121 13:29:35.986823 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" podStartSLOduration=2.986803855 podStartE2EDuration="2.986803855s" podCreationTimestamp="2026-01-21 13:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:29:35.98516381 +0000 UTC m=+1236.948194353" watchObservedRunningTime="2026-01-21 13:29:35.986803855 +0000 UTC m=+1236.949834398" Jan 21 13:29:39 crc kubenswrapper[4959]: I0121 13:29:39.996047 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s5xcr" event={"ID":"8abfd29e-86a3-448f-b722-c98d11933e6c","Type":"ContainerStarted","Data":"ae60bbe29467cba8cc3a6045563a7709a663d55754abafc9316996dcd8865834"} Jan 21 13:29:40 crc kubenswrapper[4959]: I0121 13:29:40.013631 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-s5xcr" podStartSLOduration=3.339425523 podStartE2EDuration="30.013616339s" podCreationTimestamp="2026-01-21 13:29:10 +0000 UTC" firstStartedPulling="2026-01-21 13:29:12.388554186 +0000 UTC m=+1213.351584729" lastFinishedPulling="2026-01-21 13:29:39.062745002 +0000 UTC m=+1240.025775545" observedRunningTime="2026-01-21 13:29:40.009948277 +0000 UTC m=+1240.972978820" 
watchObservedRunningTime="2026-01-21 13:29:40.013616339 +0000 UTC m=+1240.976646882" Jan 21 13:29:43 crc kubenswrapper[4959]: I0121 13:29:43.023496 4959 generic.go:334] "Generic (PLEG): container finished" podID="8abfd29e-86a3-448f-b722-c98d11933e6c" containerID="ae60bbe29467cba8cc3a6045563a7709a663d55754abafc9316996dcd8865834" exitCode=0 Jan 21 13:29:43 crc kubenswrapper[4959]: I0121 13:29:43.023561 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s5xcr" event={"ID":"8abfd29e-86a3-448f-b722-c98d11933e6c","Type":"ContainerDied","Data":"ae60bbe29467cba8cc3a6045563a7709a663d55754abafc9316996dcd8865834"} Jan 21 13:29:43 crc kubenswrapper[4959]: I0121 13:29:43.905147 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:29:43 crc kubenswrapper[4959]: I0121 13:29:43.975929 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hxqm5"] Jan 21 13:29:43 crc kubenswrapper[4959]: I0121 13:29:43.976186 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-hxqm5" podUID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" containerName="dnsmasq-dns" containerID="cri-o://1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b" gracePeriod=10 Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.438463 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.537806 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.614891 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jtn7\" (UniqueName: \"kubernetes.io/projected/8abfd29e-86a3-448f-b722-c98d11933e6c-kube-api-access-4jtn7\") pod \"8abfd29e-86a3-448f-b722-c98d11933e6c\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.615103 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-combined-ca-bundle\") pod \"8abfd29e-86a3-448f-b722-c98d11933e6c\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.615163 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-config-data\") pod \"8abfd29e-86a3-448f-b722-c98d11933e6c\" (UID: \"8abfd29e-86a3-448f-b722-c98d11933e6c\") " Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.615404 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7m2m\" (UniqueName: \"kubernetes.io/projected/58cda1cd-195e-486d-9d1f-5eee6d6caf21-kube-api-access-s7m2m\") pod \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.615464 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-dns-svc\") pod \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 
13:29:44.640369 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8abfd29e-86a3-448f-b722-c98d11933e6c-kube-api-access-4jtn7" (OuterVolumeSpecName: "kube-api-access-4jtn7") pod "8abfd29e-86a3-448f-b722-c98d11933e6c" (UID: "8abfd29e-86a3-448f-b722-c98d11933e6c"). InnerVolumeSpecName "kube-api-access-4jtn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.644394 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cda1cd-195e-486d-9d1f-5eee6d6caf21-kube-api-access-s7m2m" (OuterVolumeSpecName: "kube-api-access-s7m2m") pod "58cda1cd-195e-486d-9d1f-5eee6d6caf21" (UID: "58cda1cd-195e-486d-9d1f-5eee6d6caf21"). InnerVolumeSpecName "kube-api-access-s7m2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.716727 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-config\") pod \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.716769 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-sb\") pod \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.716787 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-nb\") pod \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\" (UID: \"58cda1cd-195e-486d-9d1f-5eee6d6caf21\") " Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.717005 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7m2m\" (UniqueName: \"kubernetes.io/projected/58cda1cd-195e-486d-9d1f-5eee6d6caf21-kube-api-access-s7m2m\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.717015 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jtn7\" (UniqueName: \"kubernetes.io/projected/8abfd29e-86a3-448f-b722-c98d11933e6c-kube-api-access-4jtn7\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.738564 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8abfd29e-86a3-448f-b722-c98d11933e6c" (UID: "8abfd29e-86a3-448f-b722-c98d11933e6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.742065 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "58cda1cd-195e-486d-9d1f-5eee6d6caf21" (UID: "58cda1cd-195e-486d-9d1f-5eee6d6caf21"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.762249 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-config-data" (OuterVolumeSpecName: "config-data") pod "8abfd29e-86a3-448f-b722-c98d11933e6c" (UID: "8abfd29e-86a3-448f-b722-c98d11933e6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.775969 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-config" (OuterVolumeSpecName: "config") pod "58cda1cd-195e-486d-9d1f-5eee6d6caf21" (UID: "58cda1cd-195e-486d-9d1f-5eee6d6caf21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.793514 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "58cda1cd-195e-486d-9d1f-5eee6d6caf21" (UID: "58cda1cd-195e-486d-9d1f-5eee6d6caf21"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.798686 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "58cda1cd-195e-486d-9d1f-5eee6d6caf21" (UID: "58cda1cd-195e-486d-9d1f-5eee6d6caf21"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.818685 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.818729 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.818742 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.818751 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.818760 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abfd29e-86a3-448f-b722-c98d11933e6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:44 crc kubenswrapper[4959]: I0121 13:29:44.818768 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58cda1cd-195e-486d-9d1f-5eee6d6caf21-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.057633 4959 generic.go:334] "Generic (PLEG): container finished" podID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" 
containerID="1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b" exitCode=0 Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.057712 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hxqm5" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.057774 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hxqm5" event={"ID":"58cda1cd-195e-486d-9d1f-5eee6d6caf21","Type":"ContainerDied","Data":"1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b"} Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.057861 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hxqm5" event={"ID":"58cda1cd-195e-486d-9d1f-5eee6d6caf21","Type":"ContainerDied","Data":"ad56fdd9cd17c7eb92af360df23ba7773b1cac01f5943e7ebfbdd34776b8c72d"} Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.057887 4959 scope.go:117] "RemoveContainer" containerID="1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.060299 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-s5xcr" event={"ID":"8abfd29e-86a3-448f-b722-c98d11933e6c","Type":"ContainerDied","Data":"9325a6945dbdfde63dbea04ed12753cef52b47c43749fac5205b1e2cb92e1efe"} Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.060328 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9325a6945dbdfde63dbea04ed12753cef52b47c43749fac5205b1e2cb92e1efe" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.060401 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-s5xcr" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.090606 4959 scope.go:117] "RemoveContainer" containerID="19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.159058 4959 scope.go:117] "RemoveContainer" containerID="1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b" Jan 21 13:29:45 crc kubenswrapper[4959]: E0121 13:29:45.159608 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b\": container with ID starting with 1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b not found: ID does not exist" containerID="1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.159718 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b"} err="failed to get container status \"1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b\": rpc error: code = NotFound desc = could not find container \"1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b\": container with ID starting with 1531ef1971520baa839c39e8f28304cee4203398738bf4366717a3b0fcecda3b not found: ID does not exist" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.159812 4959 scope.go:117] "RemoveContainer" containerID="19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145" Jan 21 13:29:45 crc kubenswrapper[4959]: E0121 13:29:45.160394 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145\": container with ID starting with 19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145 not found: ID does not exist" containerID="19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.160431 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145"} err="failed to get container status \"19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145\": rpc error: code = NotFound desc = could not find container \"19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145\": container with ID starting with 19c2ddf5c64a879d64d477a54b56c81fc7f9833c06dc82117b14097a67d49145 not found: ID does not exist" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.164054 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hxqm5"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.174825 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hxqm5"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.278002 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-jctr2"] Jan 21 13:29:45 crc kubenswrapper[4959]: E0121 13:29:45.278452 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" containerName="init" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.278476 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" containerName="init" Jan 21 13:29:45 crc kubenswrapper[4959]: E0121 13:29:45.278498 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" containerName="dnsmasq-dns" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.278506 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" containerName="dnsmasq-dns" Jan 21 13:29:45 crc kubenswrapper[4959]: E0121 13:29:45.278527 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8abfd29e-86a3-448f-b722-c98d11933e6c" containerName="keystone-db-sync" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.278535 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8abfd29e-86a3-448f-b722-c98d11933e6c" containerName="keystone-db-sync" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.278739 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" containerName="dnsmasq-dns" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.278765 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abfd29e-86a3-448f-b722-c98d11933e6c" containerName="keystone-db-sync" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.279763 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.299279 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58cda1cd-195e-486d-9d1f-5eee6d6caf21" path="/var/lib/kubelet/pods/58cda1cd-195e-486d-9d1f-5eee6d6caf21/volumes" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.299886 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-jctr2"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.339306 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7wd9r"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.342722 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.345802 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.346462 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.346813 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.347019 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.347061 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9w7kg" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.352636 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7wd9r"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.426852 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48n2v\" (UniqueName: \"kubernetes.io/projected/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-kube-api-access-48n2v\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.427040 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-config\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.427160 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.427268 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-dns-svc\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.427347 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.511623 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.521396 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.525679 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.526223 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530567 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-credential-keys\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530622 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-scripts\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530675 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48n2v\" (UniqueName: \"kubernetes.io/projected/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-kube-api-access-48n2v\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530768 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-config\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530797 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-fernet-keys\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530833 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530873 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-combined-ca-bundle\") pod \"keystone-bootstrap-7wd9r\" (UID: 
\"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530912 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-dns-svc\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530946 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chccl\" (UniqueName: \"kubernetes.io/projected/1a9ae10b-fb57-4593-a3cd-0d787431a01c-kube-api-access-chccl\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530970 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-config-data\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.530998 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.531817 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.532617 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-dns-svc\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.535278 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-config\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.536205 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.562179 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48n2v\" (UniqueName: \"kubernetes.io/projected/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-kube-api-access-48n2v\") pod \"dnsmasq-dns-67795cd9-jctr2\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc 
kubenswrapper[4959]: I0121 13:29:45.572827 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.604333 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637027 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-fernet-keys\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637120 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-log-httpd\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637152 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-combined-ca-bundle\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637195 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chccl\" (UniqueName: \"kubernetes.io/projected/1a9ae10b-fb57-4593-a3cd-0d787431a01c-kube-api-access-chccl\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637230 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-config-data\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637290 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637397 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-credential-keys\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637423 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-scripts\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637444 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-scripts\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637465 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-config-data\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637496 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-run-httpd\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637527 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.637556 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5p9\" (UniqueName: \"kubernetes.io/projected/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-kube-api-access-2q5p9\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.649377 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kbs79"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.650643 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.656979 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-npplb" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.657269 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.657479 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.669785 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-r9zsv"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.672794 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-credential-keys\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.672885 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-combined-ca-bundle\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.673141 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-fernet-keys\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.676655 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.679395 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-scripts\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.684620 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5fqfp" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.684988 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-config-data\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.691842 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chccl\" (UniqueName: \"kubernetes.io/projected/1a9ae10b-fb57-4593-a3cd-0d787431a01c-kube-api-access-chccl\") pod \"keystone-bootstrap-7wd9r\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.691911 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vl9c7"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.692670 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.692999 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.707791 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kbs79"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.708251 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.708348 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lc5wb" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.708300 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.716997 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-r9zsv"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.728311 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vl9c7"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.740232 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-29kjv"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.741222 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.741883 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.741934 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ac8172e9-2396-4f6b-a632-8e32400aea67-etc-machine-id\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.741970 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-scripts\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.741993 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-config-data\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742039 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-run-httpd\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742089 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742133 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5p9\" (UniqueName: \"kubernetes.io/projected/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-kube-api-access-2q5p9\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742161 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-combined-ca-bundle\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742190 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-scripts\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742211 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-combined-ca-bundle\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742244 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78l65\" (UniqueName: \"kubernetes.io/projected/431f411c-8ae5-42e7-b76a-4ca21314112a-kube-api-access-78l65\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742269 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-db-sync-config-data\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742311 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-config\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742342 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-log-httpd\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742367 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-db-sync-config-data\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742404 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-combined-ca-bundle\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742438 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsm4b\" (UniqueName: \"kubernetes.io/projected/ac8172e9-2396-4f6b-a632-8e32400aea67-kube-api-access-bsm4b\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742476 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plw6p\" (UniqueName: \"kubernetes.io/projected/0582f964-2c47-48a3-9bba-6f169f2a32c8-kube-api-access-plw6p\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.742506 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-config-data\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.743499 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-log-httpd\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.755423 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.755615 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.756174 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-69fqq" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.756381 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-run-httpd\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.759689 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.760779 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.760932 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-config-data\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.763873 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-scripts\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.771335 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-jctr2"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.805349 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5p9\" (UniqueName: \"kubernetes.io/projected/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-kube-api-access-2q5p9\") pod \"ceilometer-0\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.808549 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.810169 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.823997 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-29kjv"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.844848 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-db-sync-config-data\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845060 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-config-data\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845154 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-combined-ca-bundle\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845300 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsm4b\" (UniqueName: \"kubernetes.io/projected/ac8172e9-2396-4f6b-a632-8e32400aea67-kube-api-access-bsm4b\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845390 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plw6p\" (UniqueName: \"kubernetes.io/projected/0582f964-2c47-48a3-9bba-6f169f2a32c8-kube-api-access-plw6p\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845471 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-config-data\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845549 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ac8172e9-2396-4f6b-a632-8e32400aea67-etc-machine-id\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845634 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-combined-ca-bundle\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845716 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-scripts\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845794 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-combined-ca-bundle\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845881 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-scripts\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.845947 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw954\" (UniqueName: \"kubernetes.io/projected/1bbc182d-b589-4402-a7a4-e18453424630-kube-api-access-jw954\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.846029 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-combined-ca-bundle\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.846129 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78l65\" (UniqueName: \"kubernetes.io/projected/431f411c-8ae5-42e7-b76a-4ca21314112a-kube-api-access-78l65\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.846569 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bbc182d-b589-4402-a7a4-e18453424630-logs\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.846766 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-db-sync-config-data\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.846746 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ac8172e9-2396-4f6b-a632-8e32400aea67-etc-machine-id\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.847558 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-config\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " 
pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.849342 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.852509 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-db-sync-config-data\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.852841 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-config\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.853051 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-combined-ca-bundle\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.855046 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-combined-ca-bundle\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.861767 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-combined-ca-bundle\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.862200 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-scripts\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.862944 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-config-data\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.868773 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-db-sync-config-data\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.870017 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78l65\" (UniqueName: \"kubernetes.io/projected/431f411c-8ae5-42e7-b76a-4ca21314112a-kube-api-access-78l65\") pod \"neutron-db-sync-vl9c7\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 
13:29:45.870019 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plw6p\" (UniqueName: \"kubernetes.io/projected/0582f964-2c47-48a3-9bba-6f169f2a32c8-kube-api-access-plw6p\") pod \"barbican-db-sync-r9zsv\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.871007 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsm4b\" (UniqueName: \"kubernetes.io/projected/ac8172e9-2396-4f6b-a632-8e32400aea67-kube-api-access-bsm4b\") pod \"cinder-db-sync-kbs79\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.877057 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5"] Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.948798 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949137 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw954\" (UniqueName: \"kubernetes.io/projected/1bbc182d-b589-4402-a7a4-e18453424630-kube-api-access-jw954\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949227 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bbc182d-b589-4402-a7a4-e18453424630-logs\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949253 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949295 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlr4p\" (UniqueName: \"kubernetes.io/projected/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-kube-api-access-vlr4p\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949313 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-config-data\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949335 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: 
\"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949352 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-config\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949407 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-combined-ca-bundle\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.949433 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-scripts\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.950875 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bbc182d-b589-4402-a7a4-e18453424630-logs\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.955952 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-config-data\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.956071 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-scripts\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.956727 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-combined-ca-bundle\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.963319 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:29:45 crc kubenswrapper[4959]: I0121 13:29:45.975025 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw954\" (UniqueName: \"kubernetes.io/projected/1bbc182d-b589-4402-a7a4-e18453424630-kube-api-access-jw954\") pod \"placement-db-sync-29kjv\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.052252 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.052367 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.052514 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlr4p\" (UniqueName: \"kubernetes.io/projected/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-kube-api-access-vlr4p\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.052553 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.052581 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-config\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.053448 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.053515 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.053966 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: 
I0121 13:29:46.054027 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-config\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.072591 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlr4p\" (UniqueName: \"kubernetes.io/projected/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-kube-api-access-vlr4p\") pod \"dnsmasq-dns-5b6dbdb6f5-rdbt5\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.124938 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kbs79" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.136719 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:29:46 crc kubenswrapper[4959]: I0121 13:29:46.158543 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:46.202760 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-29kjv" Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:46.214440 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:46.291765 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-jctr2"] Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:46.561854 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:29:47 crc kubenswrapper[4959]: W0121 13:29:46.569528 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda255ee80_1aff_4b3e_a129_5fb11a2edb1b.slice/crio-89ed8a9e55eb7c7ba2cf9713a36a0404bb77e8cf8c2679ccffeada8111f7bc86 WatchSource:0}: Error finding container 89ed8a9e55eb7c7ba2cf9713a36a0404bb77e8cf8c2679ccffeada8111f7bc86: Status 404 returned error can't find the container with id 89ed8a9e55eb7c7ba2cf9713a36a0404bb77e8cf8c2679ccffeada8111f7bc86 Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.090214 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a255ee80-1aff-4b3e-a129-5fb11a2edb1b","Type":"ContainerStarted","Data":"89ed8a9e55eb7c7ba2cf9713a36a0404bb77e8cf8c2679ccffeada8111f7bc86"} Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.092712 4959 generic.go:334] "Generic (PLEG): container finished" podID="459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" containerID="f5ae8d34eb64ba40cc01e9830cd114188eb71ce195c02f4111e4dc18323e0988" exitCode=0 Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.092742 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-jctr2" event={"ID":"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206","Type":"ContainerDied","Data":"f5ae8d34eb64ba40cc01e9830cd114188eb71ce195c02f4111e4dc18323e0988"} Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.092759 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-jctr2" 
event={"ID":"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206","Type":"ContainerStarted","Data":"dc3a06688f9fed5554cb894d274395eaa4519475be3bd14119c31db65f7724b5"} Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.370714 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7wd9r"] Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.645950 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5"] Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.653851 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vl9c7"] Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.664896 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-r9zsv"] Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.669005 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:47 crc kubenswrapper[4959]: W0121 13:29:47.672140 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8172e9_2396_4f6b_a632_8e32400aea67.slice/crio-8da5f2219f32a338b9636b3ec58a867e5c8a570b36e441265abc7cafd841f630 WatchSource:0}: Error finding container 8da5f2219f32a338b9636b3ec58a867e5c8a570b36e441265abc7cafd841f630: Status 404 returned error can't find the container with id 8da5f2219f32a338b9636b3ec58a867e5c8a570b36e441265abc7cafd841f630 Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.674887 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kbs79"] Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.681856 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-29kjv"] Jan 21 13:29:47 crc kubenswrapper[4959]: W0121 13:29:47.702901 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bbc182d_b589_4402_a7a4_e18453424630.slice/crio-e39a2a9ec8f5509b37f84af7b5e97891d2b6bfaabafb446e7ef5e0e0354c6efe WatchSource:0}: Error finding container e39a2a9ec8f5509b37f84af7b5e97891d2b6bfaabafb446e7ef5e0e0354c6efe: Status 404 returned error can't find the container with id e39a2a9ec8f5509b37f84af7b5e97891d2b6bfaabafb446e7ef5e0e0354c6efe Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.800443 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-dns-svc\") pod \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.800478 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-config\") pod \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.800513 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48n2v\" (UniqueName: \"kubernetes.io/projected/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-kube-api-access-48n2v\") pod \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.800555 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-sb\") pod \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.800583 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-nb\") pod \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\" (UID: \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\") " Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.804406 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-kube-api-access-48n2v" (OuterVolumeSpecName: "kube-api-access-48n2v") pod "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" (UID: "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206"). InnerVolumeSpecName "kube-api-access-48n2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.955923 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48n2v\" (UniqueName: \"kubernetes.io/projected/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-kube-api-access-48n2v\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:47 crc kubenswrapper[4959]: I0121 13:29:47.966889 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.002754 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" (UID: "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.007000 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" (UID: "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.021687 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-config" (OuterVolumeSpecName: "config") pod "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" (UID: "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.041844 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" (UID: "459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206"). InnerVolumeSpecName "ovsdbserver-sb". 
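Note: when triaging a teardown like the dnsmasq-dns-67795cd9-jctr2 sequence above, it helps to pull the operation name and UniqueName out of each volume line so mount and unmount events can be paired per volume. A rough, best-effort parser for these klog-quoted lines (the quoting can vary between kubelet versions, so treat the pattern as an assumption rather than a stable format):

```go
package main

import (
	"fmt"
	"regexp"
)

// Best-effort pattern for the reconciler's volume lines: captures the
// operation, the volume name, and the volume's UniqueName.
var volRe = regexp.MustCompile(`"operationExecutor\.(\w+) started for volume \\"([^\\]+)\\" \(UniqueName: \\"([^\\]+)\\"`)

func main() {
	line := `I0121 13:29:47.800443 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-dns-svc\") pod \"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206\" "`
	if m := volRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("op=%s volume=%s uniqueName=%s\n", m[1], m[2], m[3])
	}
}
```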
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.058768 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.058807 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.058820 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.058832 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.116672 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-jctr2" event={"ID":"459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206","Type":"ContainerDied","Data":"dc3a06688f9fed5554cb894d274395eaa4519475be3bd14119c31db65f7724b5"} Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.116743 4959 scope.go:117] "RemoveContainer" containerID="f5ae8d34eb64ba40cc01e9830cd114188eb71ce195c02f4111e4dc18323e0988" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.116859 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-jctr2" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.120431 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7wd9r" event={"ID":"1a9ae10b-fb57-4593-a3cd-0d787431a01c","Type":"ContainerStarted","Data":"087eb15ea62d6c73abf255de8ceb2bffbebc639c9f5f646aef7e24a06d4ee1b8"} Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.120496 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7wd9r" event={"ID":"1a9ae10b-fb57-4593-a3cd-0d787431a01c","Type":"ContainerStarted","Data":"a60aa43da737c9e1cc561ad957908a6c0f19ee9b8083fdb86b4ea2a10cb4e589"} Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.126981 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vl9c7" event={"ID":"431f411c-8ae5-42e7-b76a-4ca21314112a","Type":"ContainerStarted","Data":"6eca0ab4c27088f3c38fe7f361931b10251801e2b04d6054f89a520bf0f9338e"} Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.169336 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" event={"ID":"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad","Type":"ContainerStarted","Data":"522e254b45797834f0bc604611b3e76a9a78e41c0ef728e436d3b116e6100d2f"} Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.176032 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r9zsv" event={"ID":"0582f964-2c47-48a3-9bba-6f169f2a32c8","Type":"ContainerStarted","Data":"56181717882e848cd8d3a708bf66e108fdbcaefdd8dc1e1d3b4a096e30553975"} Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.182478 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7wd9r" podStartSLOduration=3.182459831 
podStartE2EDuration="3.182459831s" podCreationTimestamp="2026-01-21 13:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:29:48.170348996 +0000 UTC m=+1249.133379539" watchObservedRunningTime="2026-01-21 13:29:48.182459831 +0000 UTC m=+1249.145490374" Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.183973 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbs79" event={"ID":"ac8172e9-2396-4f6b-a632-8e32400aea67","Type":"ContainerStarted","Data":"8da5f2219f32a338b9636b3ec58a867e5c8a570b36e441265abc7cafd841f630"} Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.187004 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-29kjv" event={"ID":"1bbc182d-b589-4402-a7a4-e18453424630","Type":"ContainerStarted","Data":"e39a2a9ec8f5509b37f84af7b5e97891d2b6bfaabafb446e7ef5e0e0354c6efe"} Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.264453 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-jctr2"] Jan 21 13:29:48 crc kubenswrapper[4959]: I0121 13:29:48.273590 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-jctr2"] Jan 21 13:29:49 crc kubenswrapper[4959]: I0121 13:29:49.224733 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vl9c7" event={"ID":"431f411c-8ae5-42e7-b76a-4ca21314112a","Type":"ContainerStarted","Data":"6752eaa93b9b2ea81f03375078c6d56e8911e09dc9579ec6488b009dbf950df8"} Jan 21 13:29:49 crc kubenswrapper[4959]: I0121 13:29:49.232585 4959 generic.go:334] "Generic (PLEG): container finished" podID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" containerID="9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf" exitCode=0 Jan 21 13:29:49 crc kubenswrapper[4959]: I0121 13:29:49.232646 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" event={"ID":"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad","Type":"ContainerDied","Data":"9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf"} Jan 21 13:29:49 crc kubenswrapper[4959]: I0121 13:29:49.262799 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vl9c7" podStartSLOduration=4.262784668 podStartE2EDuration="4.262784668s" podCreationTimestamp="2026-01-21 13:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:29:49.25888061 +0000 UTC m=+1250.221911163" watchObservedRunningTime="2026-01-21 13:29:49.262784668 +0000 UTC m=+1250.225815211" Jan 21 13:29:49 crc kubenswrapper[4959]: I0121 13:29:49.308356 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" path="/var/lib/kubelet/pods/459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206/volumes" Jan 21 13:29:56 crc kubenswrapper[4959]: I0121 13:29:56.038182 4959 generic.go:334] "Generic (PLEG): container finished" podID="1a9ae10b-fb57-4593-a3cd-0d787431a01c" containerID="087eb15ea62d6c73abf255de8ceb2bffbebc639c9f5f646aef7e24a06d4ee1b8" exitCode=0 Jan 21 13:29:56 crc kubenswrapper[4959]: I0121 13:29:56.038919 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7wd9r" event={"ID":"1a9ae10b-fb57-4593-a3cd-0d787431a01c","Type":"ContainerDied","Data":"087eb15ea62d6c73abf255de8ceb2bffbebc639c9f5f646aef7e24a06d4ee1b8"} Jan 
21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.156908 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx"] Jan 21 13:30:00 crc kubenswrapper[4959]: E0121 13:30:00.157838 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" containerName="init" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.157856 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" containerName="init" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.158023 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="459b84e7-3ed7-4ec1-9e8c-ebb4b2a66206" containerName="init" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.158690 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.165524 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.165572 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.172037 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx"] Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.251040 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1577aba-031c-4453-88b9-dd1e63e332a1-secret-volume\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.251170 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1577aba-031c-4453-88b9-dd1e63e332a1-config-volume\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.251262 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4fjw\" (UniqueName: \"kubernetes.io/projected/f1577aba-031c-4453-88b9-dd1e63e332a1-kube-api-access-n4fjw\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.352186 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4fjw\" (UniqueName: \"kubernetes.io/projected/f1577aba-031c-4453-88b9-dd1e63e332a1-kube-api-access-n4fjw\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.352272 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f1577aba-031c-4453-88b9-dd1e63e332a1-secret-volume\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.352309 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1577aba-031c-4453-88b9-dd1e63e332a1-config-volume\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.353207 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1577aba-031c-4453-88b9-dd1e63e332a1-config-volume\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.364521 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1577aba-031c-4453-88b9-dd1e63e332a1-secret-volume\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.379499 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4fjw\" (UniqueName: \"kubernetes.io/projected/f1577aba-031c-4453-88b9-dd1e63e332a1-kube-api-access-n4fjw\") pod \"collect-profiles-29483370-q8pnx\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:00 crc kubenswrapper[4959]: I0121 13:30:00.503156 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:05 crc kubenswrapper[4959]: E0121 13:30:05.153776 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 21 13:30:05 crc kubenswrapper[4959]: E0121 13:30:05.154281 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plw6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-r9zsv_openstack(0582f964-2c47-48a3-9bba-6f169f2a32c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:30:05 crc kubenswrapper[4959]: E0121 13:30:05.155447 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-r9zsv" podUID="0582f964-2c47-48a3-9bba-6f169f2a32c8" Jan 21 13:30:05 crc kubenswrapper[4959]: E0121 13:30:05.289848 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-r9zsv" podUID="0582f964-2c47-48a3-9bba-6f169f2a32c8" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.379144 4959 util.go:48] "No ready sandbox for pod can be found. 
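Note: the barbican-db-sync ErrImagePull above immediately degrades to ImagePullBackOff. Kubernetes backs off repeated image pulls starting at 10s, doubling per failure, capped at 5 minutes; a sketch of that schedule follows (the base and cap are the documented upstream defaults, not values read from this cluster):

```go
package main

import (
	"fmt"
	"time"
)

// pullBackoff sketches the documented image-pull back-off that yields the
// ImagePullBackOff status above: start at 10s, double per failed attempt,
// cap at 5 minutes. Schedule only, not the kubelet's actual implementation.
func pullBackoff(failedAttempts int) time.Duration {
	d := 10 * time.Second << failedAttempts
	if limit := 5 * time.Minute; d > limit {
		d = limit
	}
	return d
}

func main() {
	for i := 0; i < 7; i++ {
		fmt.Printf("after failure %d: retry in %s\n", i+1, pullBackoff(i))
	}
}
```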
Need to start a new one" pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:30:05 crc kubenswrapper[4959]: E0121 13:30:05.528474 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 21 13:30:05 crc kubenswrapper[4959]: E0121 13:30:05.528677 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59ch549h58bh9bh67ch54fh688h686h57h689h5dh8bh7fh89h5ch74h57bh66ch57bh579h55fh545hbfh657h5c5h675h694hdch57ch545h646h56q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2q5p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a255ee80-1aff-4b3e-a129-5fb11a2edb1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.555933 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-combined-ca-bundle\") pod \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.555986 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chccl\" (UniqueName: 
\"kubernetes.io/projected/1a9ae10b-fb57-4593-a3cd-0d787431a01c-kube-api-access-chccl\") pod \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.556154 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-credential-keys\") pod \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.556224 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-scripts\") pod \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.556259 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-fernet-keys\") pod \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.556321 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-config-data\") pod \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\" (UID: \"1a9ae10b-fb57-4593-a3cd-0d787431a01c\") " Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.574367 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1a9ae10b-fb57-4593-a3cd-0d787431a01c" (UID: "1a9ae10b-fb57-4593-a3cd-0d787431a01c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.574398 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9ae10b-fb57-4593-a3cd-0d787431a01c-kube-api-access-chccl" (OuterVolumeSpecName: "kube-api-access-chccl") pod "1a9ae10b-fb57-4593-a3cd-0d787431a01c" (UID: "1a9ae10b-fb57-4593-a3cd-0d787431a01c"). InnerVolumeSpecName "kube-api-access-chccl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.574428 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-scripts" (OuterVolumeSpecName: "scripts") pod "1a9ae10b-fb57-4593-a3cd-0d787431a01c" (UID: "1a9ae10b-fb57-4593-a3cd-0d787431a01c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.574495 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1a9ae10b-fb57-4593-a3cd-0d787431a01c" (UID: "1a9ae10b-fb57-4593-a3cd-0d787431a01c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.585327 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a9ae10b-fb57-4593-a3cd-0d787431a01c" (UID: "1a9ae10b-fb57-4593-a3cd-0d787431a01c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.586179 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-config-data" (OuterVolumeSpecName: "config-data") pod "1a9ae10b-fb57-4593-a3cd-0d787431a01c" (UID: "1a9ae10b-fb57-4593-a3cd-0d787431a01c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.658213 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.658249 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chccl\" (UniqueName: \"kubernetes.io/projected/1a9ae10b-fb57-4593-a3cd-0d787431a01c-kube-api-access-chccl\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.658261 4959 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.658270 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.658279 4959 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:05 crc kubenswrapper[4959]: I0121 13:30:05.658286 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9ae10b-fb57-4593-a3cd-0d787431a01c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.291748 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7wd9r" event={"ID":"1a9ae10b-fb57-4593-a3cd-0d787431a01c","Type":"ContainerDied","Data":"a60aa43da737c9e1cc561ad957908a6c0f19ee9b8083fdb86b4ea2a10cb4e589"} Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.291799 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7wd9r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.291837 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a60aa43da737c9e1cc561ad957908a6c0f19ee9b8083fdb86b4ea2a10cb4e589" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.458635 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7wd9r"] Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.466643 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7wd9r"] Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.565516 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zdr5r"] Jan 21 13:30:06 crc kubenswrapper[4959]: E0121 13:30:06.565852 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9ae10b-fb57-4593-a3cd-0d787431a01c" containerName="keystone-bootstrap" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.565869 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9ae10b-fb57-4593-a3cd-0d787431a01c" containerName="keystone-bootstrap" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.566022 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9ae10b-fb57-4593-a3cd-0d787431a01c" containerName="keystone-bootstrap" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.566759 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.570193 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.570304 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.570551 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9w7kg" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.570659 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.570741 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.579514 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zdr5r"] Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.674126 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-fernet-keys\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.674203 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-config-data\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.674263 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh99h\" (UniqueName: 
\"kubernetes.io/projected/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-kube-api-access-bh99h\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.674421 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-credential-keys\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.674495 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-scripts\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.674562 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-combined-ca-bundle\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.775525 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-combined-ca-bundle\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.775619 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-fernet-keys\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.775646 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-config-data\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.775674 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh99h\" (UniqueName: \"kubernetes.io/projected/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-kube-api-access-bh99h\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.775708 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-credential-keys\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.775740 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-scripts\") pod 
\"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.782170 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-config-data\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.785818 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-combined-ca-bundle\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:06 crc kubenswrapper[4959]: I0121 13:30:06.786685 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-credential-keys\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:07 crc kubenswrapper[4959]: I0121 13:30:07.124484 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-fernet-keys\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:07 crc kubenswrapper[4959]: I0121 13:30:07.310957 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-scripts\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:07 crc kubenswrapper[4959]: I0121 13:30:07.311308 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh99h\" (UniqueName: \"kubernetes.io/projected/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-kube-api-access-bh99h\") pod \"keystone-bootstrap-zdr5r\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:07 crc kubenswrapper[4959]: I0121 13:30:07.322793 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9ae10b-fb57-4593-a3cd-0d787431a01c" path="/var/lib/kubelet/pods/1a9ae10b-fb57-4593-a3cd-0d787431a01c/volumes" Jan 21 13:30:07 crc kubenswrapper[4959]: I0121 13:30:07.493001 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:21 crc kubenswrapper[4959]: I0121 13:30:21.379695 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:30:21 crc kubenswrapper[4959]: I0121 13:30:21.380315 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:30:23 crc kubenswrapper[4959]: E0121 13:30:23.213990 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 21 13:30:23 crc kubenswrapper[4959]: E0121 13:30:23.214824 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bsm4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizeP
olicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kbs79_openstack(ac8172e9-2396-4f6b-a632-8e32400aea67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:30:23 crc kubenswrapper[4959]: E0121 13:30:23.216015 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kbs79" podUID="ac8172e9-2396-4f6b-a632-8e32400aea67" Jan 21 13:30:23 crc kubenswrapper[4959]: I0121 13:30:23.347275 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zdr5r"] Jan 21 13:30:23 crc kubenswrapper[4959]: I0121 13:30:23.445751 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx"] Jan 21 13:30:23 crc kubenswrapper[4959]: I0121 13:30:23.455648 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" event={"ID":"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad","Type":"ContainerStarted","Data":"c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8"} Jan 21 13:30:23 crc kubenswrapper[4959]: I0121 13:30:23.457302 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-29kjv" event={"ID":"1bbc182d-b589-4402-a7a4-e18453424630","Type":"ContainerStarted","Data":"b111740e5beee284477df938d7c9bd06c39614e4c65951e34b440230ae99bf60"} Jan 21 13:30:23 crc kubenswrapper[4959]: I0121 13:30:23.478060 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-29kjv" podStartSLOduration=3.194524666 podStartE2EDuration="38.478039956s" podCreationTimestamp="2026-01-21 13:29:45 +0000 UTC" firstStartedPulling="2026-01-21 13:29:47.718536792 +0000 UTC m=+1248.681567335" lastFinishedPulling="2026-01-21 13:30:23.002052082 +0000 UTC m=+1283.965082625" observedRunningTime="2026-01-21 13:30:23.469480259 +0000 UTC m=+1284.432510812" watchObservedRunningTime="2026-01-21 13:30:23.478039956 +0000 UTC m=+1284.441070499" Jan 21 13:30:23 crc kubenswrapper[4959]: E0121 13:30:23.612794 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-kbs79" podUID="ac8172e9-2396-4f6b-a632-8e32400aea67" Jan 21 13:30:24 crc kubenswrapper[4959]: I0121 13:30:24.467387 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zdr5r" event={"ID":"c681a0f6-3130-46d9-8a0e-9c27ae2a3171","Type":"ContainerStarted","Data":"44cd4f18cae36dac84e3114bd7432adef7952c9b66f72d87f9a12404d0403310"} Jan 21 13:30:24 crc kubenswrapper[4959]: I0121 13:30:24.468542 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" event={"ID":"f1577aba-031c-4453-88b9-dd1e63e332a1","Type":"ContainerStarted","Data":"75599388e498c078934e28a661c1e11975de93ae2cd2f46700f648192111dc12"} Jan 21 13:30:24 crc kubenswrapper[4959]: I0121 13:30:24.468941 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:30:24 crc kubenswrapper[4959]: I0121 13:30:24.502496 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" podStartSLOduration=39.502475256 podStartE2EDuration="39.502475256s" podCreationTimestamp="2026-01-21 13:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:30:24.500665116 +0000 UTC m=+1285.463695669" watchObservedRunningTime="2026-01-21 13:30:24.502475256 +0000 UTC m=+1285.465505789" Jan 21 13:30:25 crc kubenswrapper[4959]: I0121 13:30:25.475887 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r9zsv" event={"ID":"0582f964-2c47-48a3-9bba-6f169f2a32c8","Type":"ContainerStarted","Data":"4422224cf5f417b5d2257f2e2bb002a11b30e2bc8d21daa03c7421455d557f80"} Jan 21 13:30:25 crc kubenswrapper[4959]: I0121 13:30:25.479036 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zdr5r" event={"ID":"c681a0f6-3130-46d9-8a0e-9c27ae2a3171","Type":"ContainerStarted","Data":"dcafc570377a5e7f90897c01c08185d803b8c72c10f1127baab477c9febb0df6"} Jan 21 13:30:25 crc kubenswrapper[4959]: I0121 13:30:25.481557 4959 generic.go:334] "Generic (PLEG): container finished" podID="f1577aba-031c-4453-88b9-dd1e63e332a1" containerID="fdf3de6306960ea44c0741d9ac16a35f697138e43a129dccd6c6f595082d3987" exitCode=0 Jan 21 13:30:25 crc kubenswrapper[4959]: I0121 13:30:25.481620 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" event={"ID":"f1577aba-031c-4453-88b9-dd1e63e332a1","Type":"ContainerDied","Data":"fdf3de6306960ea44c0741d9ac16a35f697138e43a129dccd6c6f595082d3987"} Jan 21 13:30:25 crc kubenswrapper[4959]: I0121 13:30:25.484119 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a255ee80-1aff-4b3e-a129-5fb11a2edb1b","Type":"ContainerStarted","Data":"af2cc15ba9e953f809168b2e7b6230721e4fe72b1eb8e272e8e2ef444e1bd5c0"} Jan 21 13:30:25 crc kubenswrapper[4959]: I0121 13:30:25.494214 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-r9zsv" podStartSLOduration=3.6847437530000002 podStartE2EDuration="40.494195643s" podCreationTimestamp="2026-01-21 13:29:45 +0000 UTC" firstStartedPulling="2026-01-21 13:29:47.67000694 +0000 UTC m=+1248.633037483" lastFinishedPulling="2026-01-21 13:30:24.47945879 +0000 UTC m=+1285.442489373" observedRunningTime="2026-01-21 13:30:25.493400981 +0000 UTC m=+1286.456431524" watchObservedRunningTime="2026-01-21 13:30:25.494195643 +0000 UTC m=+1286.457226186" Jan 21 13:30:25 crc kubenswrapper[4959]: I0121 13:30:25.524801 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zdr5r" podStartSLOduration=19.524755698 podStartE2EDuration="19.524755698s" podCreationTimestamp="2026-01-21 13:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:30:25.513178028 +0000 UTC m=+1286.476208571" watchObservedRunningTime="2026-01-21 13:30:25.524755698 +0000 UTC m=+1286.487786241" Jan 21 13:30:26 crc kubenswrapper[4959]: I0121 13:30:26.828712 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:26 crc kubenswrapper[4959]: I0121 13:30:26.978457 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1577aba-031c-4453-88b9-dd1e63e332a1-config-volume\") pod \"f1577aba-031c-4453-88b9-dd1e63e332a1\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " Jan 21 13:30:26 crc kubenswrapper[4959]: I0121 13:30:26.978572 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4fjw\" (UniqueName: \"kubernetes.io/projected/f1577aba-031c-4453-88b9-dd1e63e332a1-kube-api-access-n4fjw\") pod \"f1577aba-031c-4453-88b9-dd1e63e332a1\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " Jan 21 13:30:26 crc kubenswrapper[4959]: I0121 13:30:26.978624 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1577aba-031c-4453-88b9-dd1e63e332a1-secret-volume\") pod \"f1577aba-031c-4453-88b9-dd1e63e332a1\" (UID: \"f1577aba-031c-4453-88b9-dd1e63e332a1\") " Jan 21 13:30:26 crc kubenswrapper[4959]: I0121 13:30:26.979674 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1577aba-031c-4453-88b9-dd1e63e332a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f1577aba-031c-4453-88b9-dd1e63e332a1" (UID: "f1577aba-031c-4453-88b9-dd1e63e332a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:30:26 crc kubenswrapper[4959]: I0121 13:30:26.995210 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1577aba-031c-4453-88b9-dd1e63e332a1-kube-api-access-n4fjw" (OuterVolumeSpecName: "kube-api-access-n4fjw") pod "f1577aba-031c-4453-88b9-dd1e63e332a1" (UID: "f1577aba-031c-4453-88b9-dd1e63e332a1"). InnerVolumeSpecName "kube-api-access-n4fjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:26 crc kubenswrapper[4959]: I0121 13:30:26.998995 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1577aba-031c-4453-88b9-dd1e63e332a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f1577aba-031c-4453-88b9-dd1e63e332a1" (UID: "f1577aba-031c-4453-88b9-dd1e63e332a1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:27 crc kubenswrapper[4959]: I0121 13:30:27.079509 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4fjw\" (UniqueName: \"kubernetes.io/projected/f1577aba-031c-4453-88b9-dd1e63e332a1-kube-api-access-n4fjw\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:27 crc kubenswrapper[4959]: I0121 13:30:27.079546 4959 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1577aba-031c-4453-88b9-dd1e63e332a1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:27 crc kubenswrapper[4959]: I0121 13:30:27.079554 4959 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1577aba-031c-4453-88b9-dd1e63e332a1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:27 crc kubenswrapper[4959]: I0121 13:30:27.502150 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" event={"ID":"f1577aba-031c-4453-88b9-dd1e63e332a1","Type":"ContainerDied","Data":"75599388e498c078934e28a661c1e11975de93ae2cd2f46700f648192111dc12"} Jan 21 13:30:27 crc kubenswrapper[4959]: I0121 13:30:27.502207 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75599388e498c078934e28a661c1e11975de93ae2cd2f46700f648192111dc12" Jan 21 13:30:27 crc kubenswrapper[4959]: I0121 13:30:27.502228 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx" Jan 21 13:30:31 crc kubenswrapper[4959]: I0121 13:30:31.217300 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:30:31 crc kubenswrapper[4959]: I0121 13:30:31.276113 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-7r8cd"] Jan 21 13:30:31 crc kubenswrapper[4959]: I0121 13:30:31.276326 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" podUID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerName="dnsmasq-dns" containerID="cri-o://eb9c9759a449d5e41a9a0bfc0f4efe6bebfb84ae0ed8874846aa6dbc9921f80e" gracePeriod=10 Jan 21 13:30:31 crc kubenswrapper[4959]: I0121 13:30:31.566213 4959 generic.go:334] "Generic (PLEG): container finished" podID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerID="eb9c9759a449d5e41a9a0bfc0f4efe6bebfb84ae0ed8874846aa6dbc9921f80e" exitCode=0 Jan 21 13:30:31 crc kubenswrapper[4959]: I0121 13:30:31.566255 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" event={"ID":"708871f1-a26e-4537-8fce-bbaddf86c2e6","Type":"ContainerDied","Data":"eb9c9759a449d5e41a9a0bfc0f4efe6bebfb84ae0ed8874846aa6dbc9921f80e"} Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.584742 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.588229 4959 generic.go:334] "Generic (PLEG): container finished" podID="c681a0f6-3130-46d9-8a0e-9c27ae2a3171" containerID="dcafc570377a5e7f90897c01c08185d803b8c72c10f1127baab477c9febb0df6" exitCode=0 Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.588317 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zdr5r" event={"ID":"c681a0f6-3130-46d9-8a0e-9c27ae2a3171","Type":"ContainerDied","Data":"dcafc570377a5e7f90897c01c08185d803b8c72c10f1127baab477c9febb0df6"} Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.590078 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" event={"ID":"708871f1-a26e-4537-8fce-bbaddf86c2e6","Type":"ContainerDied","Data":"2388337fb7cad27c567c6d16a6d850fd864984e1efda146b3544c84573f2a2be"} Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.590142 4959 scope.go:117] "RemoveContainer" containerID="eb9c9759a449d5e41a9a0bfc0f4efe6bebfb84ae0ed8874846aa6dbc9921f80e" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.590172 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.611656 4959 scope.go:117] "RemoveContainer" containerID="508d54e4f80970a7b48d000acc246ea93a4610d91a2a85eb444566b332c0b0cb" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.735153 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-dns-svc\") pod \"708871f1-a26e-4537-8fce-bbaddf86c2e6\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.735274 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-config\") pod \"708871f1-a26e-4537-8fce-bbaddf86c2e6\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.735312 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/708871f1-a26e-4537-8fce-bbaddf86c2e6-kube-api-access-4lgp8\") pod \"708871f1-a26e-4537-8fce-bbaddf86c2e6\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.735403 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-sb\") pod \"708871f1-a26e-4537-8fce-bbaddf86c2e6\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.735453 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-nb\") pod \"708871f1-a26e-4537-8fce-bbaddf86c2e6\" (UID: \"708871f1-a26e-4537-8fce-bbaddf86c2e6\") " Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.741770 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708871f1-a26e-4537-8fce-bbaddf86c2e6-kube-api-access-4lgp8" (OuterVolumeSpecName: "kube-api-access-4lgp8") pod 
"708871f1-a26e-4537-8fce-bbaddf86c2e6" (UID: "708871f1-a26e-4537-8fce-bbaddf86c2e6"). InnerVolumeSpecName "kube-api-access-4lgp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.774254 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-config" (OuterVolumeSpecName: "config") pod "708871f1-a26e-4537-8fce-bbaddf86c2e6" (UID: "708871f1-a26e-4537-8fce-bbaddf86c2e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.774989 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "708871f1-a26e-4537-8fce-bbaddf86c2e6" (UID: "708871f1-a26e-4537-8fce-bbaddf86c2e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.775014 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "708871f1-a26e-4537-8fce-bbaddf86c2e6" (UID: "708871f1-a26e-4537-8fce-bbaddf86c2e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.777300 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "708871f1-a26e-4537-8fce-bbaddf86c2e6" (UID: "708871f1-a26e-4537-8fce-bbaddf86c2e6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.837118 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.837179 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.837188 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lgp8\" (UniqueName: \"kubernetes.io/projected/708871f1-a26e-4537-8fce-bbaddf86c2e6-kube-api-access-4lgp8\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.837200 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.837209 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708871f1-a26e-4537-8fce-bbaddf86c2e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.922803 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-7r8cd"] Jan 21 13:30:34 crc kubenswrapper[4959]: I0121 13:30:34.928979 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-7r8cd"] Jan 21 13:30:35 crc kubenswrapper[4959]: I0121 13:30:35.294988 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708871f1-a26e-4537-8fce-bbaddf86c2e6" path="/var/lib/kubelet/pods/708871f1-a26e-4537-8fce-bbaddf86c2e6/volumes" Jan 21 13:30:35 crc kubenswrapper[4959]: I0121 13:30:35.905290 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.057746 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-fernet-keys\") pod \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.057834 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-credential-keys\") pod \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.057936 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-combined-ca-bundle\") pod \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.058003 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-config-data\") pod \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.058037 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-scripts\") pod \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.058111 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh99h\" (UniqueName: \"kubernetes.io/projected/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-kube-api-access-bh99h\") pod \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\" (UID: \"c681a0f6-3130-46d9-8a0e-9c27ae2a3171\") " Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.062371 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c681a0f6-3130-46d9-8a0e-9c27ae2a3171" (UID: "c681a0f6-3130-46d9-8a0e-9c27ae2a3171"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.062426 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-scripts" (OuterVolumeSpecName: "scripts") pod "c681a0f6-3130-46d9-8a0e-9c27ae2a3171" (UID: "c681a0f6-3130-46d9-8a0e-9c27ae2a3171"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.063290 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c681a0f6-3130-46d9-8a0e-9c27ae2a3171" (UID: "c681a0f6-3130-46d9-8a0e-9c27ae2a3171"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.065304 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-kube-api-access-bh99h" (OuterVolumeSpecName: "kube-api-access-bh99h") pod "c681a0f6-3130-46d9-8a0e-9c27ae2a3171" (UID: "c681a0f6-3130-46d9-8a0e-9c27ae2a3171"). InnerVolumeSpecName "kube-api-access-bh99h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.087275 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c681a0f6-3130-46d9-8a0e-9c27ae2a3171" (UID: "c681a0f6-3130-46d9-8a0e-9c27ae2a3171"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.096340 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-config-data" (OuterVolumeSpecName: "config-data") pod "c681a0f6-3130-46d9-8a0e-9c27ae2a3171" (UID: "c681a0f6-3130-46d9-8a0e-9c27ae2a3171"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.160944 4959 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.161016 4959 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.161041 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.161069 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.161092 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.161152 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh99h\" (UniqueName: \"kubernetes.io/projected/c681a0f6-3130-46d9-8a0e-9c27ae2a3171-kube-api-access-bh99h\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.608307 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a255ee80-1aff-4b3e-a129-5fb11a2edb1b","Type":"ContainerStarted","Data":"366378b663f14f9c840bf2894f5baea947412252098c7d4766037beb252dbb10"} Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.609537 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zdr5r" 
event={"ID":"c681a0f6-3130-46d9-8a0e-9c27ae2a3171","Type":"ContainerDied","Data":"44cd4f18cae36dac84e3114bd7432adef7952c9b66f72d87f9a12404d0403310"} Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.609567 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44cd4f18cae36dac84e3114bd7432adef7952c9b66f72d87f9a12404d0403310" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.609592 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zdr5r" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.829164 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cf9fc5bf6-zcb4m"] Jan 21 13:30:36 crc kubenswrapper[4959]: E0121 13:30:36.829522 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1577aba-031c-4453-88b9-dd1e63e332a1" containerName="collect-profiles" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.829539 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1577aba-031c-4453-88b9-dd1e63e332a1" containerName="collect-profiles" Jan 21 13:30:36 crc kubenswrapper[4959]: E0121 13:30:36.829547 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c681a0f6-3130-46d9-8a0e-9c27ae2a3171" containerName="keystone-bootstrap" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.829556 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c681a0f6-3130-46d9-8a0e-9c27ae2a3171" containerName="keystone-bootstrap" Jan 21 13:30:36 crc kubenswrapper[4959]: E0121 13:30:36.829577 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerName="dnsmasq-dns" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.829583 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerName="dnsmasq-dns" Jan 21 13:30:36 crc kubenswrapper[4959]: E0121 13:30:36.829593 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerName="init" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.829602 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerName="init" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.829792 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1577aba-031c-4453-88b9-dd1e63e332a1" containerName="collect-profiles" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.829810 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerName="dnsmasq-dns" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.829821 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c681a0f6-3130-46d9-8a0e-9c27ae2a3171" containerName="keystone-bootstrap" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.830375 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.832471 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.835939 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.836001 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.836994 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9w7kg" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.839391 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.840942 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.848921 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cf9fc5bf6-zcb4m"] Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.974290 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25fxp\" (UniqueName: \"kubernetes.io/projected/2f12325a-947b-48c4-af78-286eec8a25f8-kube-api-access-25fxp\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.974343 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-fernet-keys\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.974384 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-scripts\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.974432 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-public-tls-certs\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.974486 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-internal-tls-certs\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.974533 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-credential-keys\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: 
\"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.974575 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-config-data\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:36 crc kubenswrapper[4959]: I0121 13:30:36.974602 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-combined-ca-bundle\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.075202 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-internal-tls-certs\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.075447 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-credential-keys\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.075577 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-config-data\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.075672 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-combined-ca-bundle\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.075778 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25fxp\" (UniqueName: \"kubernetes.io/projected/2f12325a-947b-48c4-af78-286eec8a25f8-kube-api-access-25fxp\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.075889 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-fernet-keys\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.075968 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-scripts\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" 
Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.076074 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-public-tls-certs\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.080359 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-credential-keys\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.080530 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-combined-ca-bundle\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.080566 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-fernet-keys\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.080770 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-scripts\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.081303 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-internal-tls-certs\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.081828 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-public-tls-certs\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.082359 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f12325a-947b-48c4-af78-286eec8a25f8-config-data\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.093857 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25fxp\" (UniqueName: \"kubernetes.io/projected/2f12325a-947b-48c4-af78-286eec8a25f8-kube-api-access-25fxp\") pod \"keystone-6cf9fc5bf6-zcb4m\" (UID: \"2f12325a-947b-48c4-af78-286eec8a25f8\") " pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:37 crc kubenswrapper[4959]: I0121 13:30:37.148815 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:38 crc kubenswrapper[4959]: I0121 13:30:38.264915 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cf9fc5bf6-zcb4m"] Jan 21 13:30:38 crc kubenswrapper[4959]: I0121 13:30:38.885308 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-7r8cd" podUID="708871f1-a26e-4537-8fce-bbaddf86c2e6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 21 13:30:38 crc kubenswrapper[4959]: I0121 13:30:38.887459 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf9fc5bf6-zcb4m" event={"ID":"2f12325a-947b-48c4-af78-286eec8a25f8","Type":"ContainerStarted","Data":"801165d947d20f087da55aa98b8f122ea37b4d7f885515e8c11ebbeb20e3050a"} Jan 21 13:30:39 crc kubenswrapper[4959]: I0121 13:30:39.895158 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf9fc5bf6-zcb4m" event={"ID":"2f12325a-947b-48c4-af78-286eec8a25f8","Type":"ContainerStarted","Data":"c72d982e5738acfc76bdbe2a13c6bec251187af347b280f588e038344bccd75b"} Jan 21 13:30:39 crc kubenswrapper[4959]: I0121 13:30:39.895409 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:30:39 crc kubenswrapper[4959]: I0121 13:30:39.919564 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6cf9fc5bf6-zcb4m" podStartSLOduration=3.919546002 podStartE2EDuration="3.919546002s" podCreationTimestamp="2026-01-21 13:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:30:39.912119577 +0000 UTC m=+1300.875150140" watchObservedRunningTime="2026-01-21 13:30:39.919546002 +0000 UTC m=+1300.882576555" Jan 21 13:30:49 crc kubenswrapper[4959]: E0121 13:30:49.943142 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" Jan 21 13:30:50 crc kubenswrapper[4959]: I0121 13:30:50.023763 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a255ee80-1aff-4b3e-a129-5fb11a2edb1b","Type":"ContainerStarted","Data":"9cc743a644d5af825c0234a52783e853d29a62846daa40376bbb917d33b60a1f"} Jan 21 13:30:50 crc kubenswrapper[4959]: I0121 13:30:50.023954 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="ceilometer-notification-agent" containerID="cri-o://af2cc15ba9e953f809168b2e7b6230721e4fe72b1eb8e272e8e2ef444e1bd5c0" gracePeriod=30 Jan 21 13:30:50 crc kubenswrapper[4959]: I0121 13:30:50.024046 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="sg-core" containerID="cri-o://366378b663f14f9c840bf2894f5baea947412252098c7d4766037beb252dbb10" gracePeriod=30 Jan 21 13:30:50 crc kubenswrapper[4959]: I0121 13:30:50.024164 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="proxy-httpd" 
containerID="cri-o://9cc743a644d5af825c0234a52783e853d29a62846daa40376bbb917d33b60a1f" gracePeriod=30 Jan 21 13:30:50 crc kubenswrapper[4959]: I0121 13:30:50.023976 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:30:51 crc kubenswrapper[4959]: I0121 13:30:51.034080 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbs79" event={"ID":"ac8172e9-2396-4f6b-a632-8e32400aea67","Type":"ContainerStarted","Data":"d4f2eee4ac0b3985c4aa47ece6bc98dc713f27db40b41de0b80d6acdf1cbdc3d"} Jan 21 13:30:51 crc kubenswrapper[4959]: I0121 13:30:51.035473 4959 generic.go:334] "Generic (PLEG): container finished" podID="1bbc182d-b589-4402-a7a4-e18453424630" containerID="b111740e5beee284477df938d7c9bd06c39614e4c65951e34b440230ae99bf60" exitCode=0 Jan 21 13:30:51 crc kubenswrapper[4959]: I0121 13:30:51.035555 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-29kjv" event={"ID":"1bbc182d-b589-4402-a7a4-e18453424630","Type":"ContainerDied","Data":"b111740e5beee284477df938d7c9bd06c39614e4c65951e34b440230ae99bf60"} Jan 21 13:30:51 crc kubenswrapper[4959]: I0121 13:30:51.037994 4959 generic.go:334] "Generic (PLEG): container finished" podID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerID="366378b663f14f9c840bf2894f5baea947412252098c7d4766037beb252dbb10" exitCode=2 Jan 21 13:30:51 crc kubenswrapper[4959]: I0121 13:30:51.038029 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a255ee80-1aff-4b3e-a129-5fb11a2edb1b","Type":"ContainerDied","Data":"366378b663f14f9c840bf2894f5baea947412252098c7d4766037beb252dbb10"} Jan 21 13:30:51 crc kubenswrapper[4959]: I0121 13:30:51.060123 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kbs79" podStartSLOduration=4.118897839 podStartE2EDuration="1m6.060087178s" podCreationTimestamp="2026-01-21 13:29:45 +0000 UTC" firstStartedPulling="2026-01-21 13:29:47.679358958 +0000 UTC m=+1248.642389501" lastFinishedPulling="2026-01-21 13:30:49.620548297 +0000 UTC m=+1310.583578840" observedRunningTime="2026-01-21 13:30:51.053782764 +0000 UTC m=+1312.016813307" watchObservedRunningTime="2026-01-21 13:30:51.060087178 +0000 UTC m=+1312.023117721" Jan 21 13:30:51 crc kubenswrapper[4959]: I0121 13:30:51.380378 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:30:51 crc kubenswrapper[4959]: I0121 13:30:51.380435 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.049376 4959 generic.go:334] "Generic (PLEG): container finished" podID="0582f964-2c47-48a3-9bba-6f169f2a32c8" containerID="4422224cf5f417b5d2257f2e2bb002a11b30e2bc8d21daa03c7421455d557f80" exitCode=0 Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.049459 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r9zsv" 
event={"ID":"0582f964-2c47-48a3-9bba-6f169f2a32c8","Type":"ContainerDied","Data":"4422224cf5f417b5d2257f2e2bb002a11b30e2bc8d21daa03c7421455d557f80"} Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.052344 4959 generic.go:334] "Generic (PLEG): container finished" podID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerID="af2cc15ba9e953f809168b2e7b6230721e4fe72b1eb8e272e8e2ef444e1bd5c0" exitCode=0 Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.052418 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a255ee80-1aff-4b3e-a129-5fb11a2edb1b","Type":"ContainerDied","Data":"af2cc15ba9e953f809168b2e7b6230721e4fe72b1eb8e272e8e2ef444e1bd5c0"} Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.365766 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-29kjv" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.403678 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bbc182d-b589-4402-a7a4-e18453424630-logs\") pod \"1bbc182d-b589-4402-a7a4-e18453424630\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.403827 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw954\" (UniqueName: \"kubernetes.io/projected/1bbc182d-b589-4402-a7a4-e18453424630-kube-api-access-jw954\") pod \"1bbc182d-b589-4402-a7a4-e18453424630\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.403887 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-combined-ca-bundle\") pod \"1bbc182d-b589-4402-a7a4-e18453424630\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.403915 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-config-data\") pod \"1bbc182d-b589-4402-a7a4-e18453424630\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.403986 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-scripts\") pod \"1bbc182d-b589-4402-a7a4-e18453424630\" (UID: \"1bbc182d-b589-4402-a7a4-e18453424630\") " Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.404259 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bbc182d-b589-4402-a7a4-e18453424630-logs" (OuterVolumeSpecName: "logs") pod "1bbc182d-b589-4402-a7a4-e18453424630" (UID: "1bbc182d-b589-4402-a7a4-e18453424630"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.404687 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bbc182d-b589-4402-a7a4-e18453424630-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.410513 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-scripts" (OuterVolumeSpecName: "scripts") pod "1bbc182d-b589-4402-a7a4-e18453424630" (UID: "1bbc182d-b589-4402-a7a4-e18453424630"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.410532 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbc182d-b589-4402-a7a4-e18453424630-kube-api-access-jw954" (OuterVolumeSpecName: "kube-api-access-jw954") pod "1bbc182d-b589-4402-a7a4-e18453424630" (UID: "1bbc182d-b589-4402-a7a4-e18453424630"). InnerVolumeSpecName "kube-api-access-jw954". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.433480 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-config-data" (OuterVolumeSpecName: "config-data") pod "1bbc182d-b589-4402-a7a4-e18453424630" (UID: "1bbc182d-b589-4402-a7a4-e18453424630"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.434522 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bbc182d-b589-4402-a7a4-e18453424630" (UID: "1bbc182d-b589-4402-a7a4-e18453424630"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.506489 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw954\" (UniqueName: \"kubernetes.io/projected/1bbc182d-b589-4402-a7a4-e18453424630-kube-api-access-jw954\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.506833 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.506847 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:52 crc kubenswrapper[4959]: I0121 13:30:52.506859 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbc182d-b589-4402-a7a4-e18453424630-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.061645 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-29kjv" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.061645 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-29kjv" event={"ID":"1bbc182d-b589-4402-a7a4-e18453424630","Type":"ContainerDied","Data":"e39a2a9ec8f5509b37f84af7b5e97891d2b6bfaabafb446e7ef5e0e0354c6efe"} Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.061706 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e39a2a9ec8f5509b37f84af7b5e97891d2b6bfaabafb446e7ef5e0e0354c6efe" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.269004 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5cb6d76584-4n6sf"] Jan 21 13:30:53 crc kubenswrapper[4959]: E0121 13:30:53.269410 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbc182d-b589-4402-a7a4-e18453424630" containerName="placement-db-sync" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.269431 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbc182d-b589-4402-a7a4-e18453424630" containerName="placement-db-sync" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.269644 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbc182d-b589-4402-a7a4-e18453424630" containerName="placement-db-sync" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.270866 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.275428 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.275738 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.275819 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.275824 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.276172 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-69fqq" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.310004 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5cb6d76584-4n6sf"] Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.320527 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cvv8\" (UniqueName: \"kubernetes.io/projected/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-kube-api-access-2cvv8\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.321005 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-logs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.321162 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-combined-ca-bundle\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.321251 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-internal-tls-certs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.321351 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-public-tls-certs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.321588 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-scripts\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.321764 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-config-data\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.323764 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.422981 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-db-sync-config-data\") pod \"0582f964-2c47-48a3-9bba-6f169f2a32c8\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423122 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-combined-ca-bundle\") pod \"0582f964-2c47-48a3-9bba-6f169f2a32c8\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423166 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plw6p\" (UniqueName: \"kubernetes.io/projected/0582f964-2c47-48a3-9bba-6f169f2a32c8-kube-api-access-plw6p\") pod \"0582f964-2c47-48a3-9bba-6f169f2a32c8\" (UID: \"0582f964-2c47-48a3-9bba-6f169f2a32c8\") " Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423401 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-combined-ca-bundle\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423448 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-internal-tls-certs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423475 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-public-tls-certs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423541 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-scripts\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423595 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-config-data\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423622 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cvv8\" (UniqueName: \"kubernetes.io/projected/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-kube-api-access-2cvv8\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.423645 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-logs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.424446 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-logs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.427840 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0582f964-2c47-48a3-9bba-6f169f2a32c8-kube-api-access-plw6p" (OuterVolumeSpecName: "kube-api-access-plw6p") pod "0582f964-2c47-48a3-9bba-6f169f2a32c8" (UID: "0582f964-2c47-48a3-9bba-6f169f2a32c8"). InnerVolumeSpecName "kube-api-access-plw6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.428559 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-internal-tls-certs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.429222 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-scripts\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.430285 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-public-tls-certs\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.431139 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0582f964-2c47-48a3-9bba-6f169f2a32c8" (UID: "0582f964-2c47-48a3-9bba-6f169f2a32c8"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.432518 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-combined-ca-bundle\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.435480 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-config-data\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.441400 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cvv8\" (UniqueName: \"kubernetes.io/projected/8d68c35e-a1e4-46f6-a8d3-29cc8206eab3-kube-api-access-2cvv8\") pod \"placement-5cb6d76584-4n6sf\" (UID: \"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3\") " pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.454718 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0582f964-2c47-48a3-9bba-6f169f2a32c8" (UID: "0582f964-2c47-48a3-9bba-6f169f2a32c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.524981 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.525030 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plw6p\" (UniqueName: \"kubernetes.io/projected/0582f964-2c47-48a3-9bba-6f169f2a32c8-kube-api-access-plw6p\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.525040 4959 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0582f964-2c47-48a3-9bba-6f169f2a32c8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:53 crc kubenswrapper[4959]: I0121 13:30:53.642957 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.076632 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r9zsv" event={"ID":"0582f964-2c47-48a3-9bba-6f169f2a32c8","Type":"ContainerDied","Data":"56181717882e848cd8d3a708bf66e108fdbcaefdd8dc1e1d3b4a096e30553975"} Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.076687 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56181717882e848cd8d3a708bf66e108fdbcaefdd8dc1e1d3b4a096e30553975" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.076702 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-r9zsv" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.102313 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5cb6d76584-4n6sf"] Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.428173 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84d64bc77f-q9rgh"] Jan 21 13:30:54 crc kubenswrapper[4959]: E0121 13:30:54.428671 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0582f964-2c47-48a3-9bba-6f169f2a32c8" containerName="barbican-db-sync" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.428695 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0582f964-2c47-48a3-9bba-6f169f2a32c8" containerName="barbican-db-sync" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.428903 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0582f964-2c47-48a3-9bba-6f169f2a32c8" containerName="barbican-db-sync" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.430033 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.436569 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5fqfp" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.440528 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.440748 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.441785 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-config-data-custom\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.441833 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-config-data\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.441950 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60523f6c-8c7e-4591-b969-515e6d9ac271-logs\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.441977 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-combined-ca-bundle\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.442012 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqqn\" (UniqueName: 
\"kubernetes.io/projected/60523f6c-8c7e-4591-b969-515e6d9ac271-kube-api-access-ncqqn\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.472240 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-d8765856b-nw6p9"] Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.473834 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.477643 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84d64bc77f-q9rgh"] Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.484853 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d8765856b-nw6p9"] Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.485645 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543497 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc944bb-6924-4282-bd07-5e221f7c7460-logs\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543641 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbfrn\" (UniqueName: \"kubernetes.io/projected/0cc944bb-6924-4282-bd07-5e221f7c7460-kube-api-access-nbfrn\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543693 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60523f6c-8c7e-4591-b969-515e6d9ac271-logs\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543719 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-config-data-custom\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543750 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-combined-ca-bundle\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543784 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqqn\" (UniqueName: \"kubernetes.io/projected/60523f6c-8c7e-4591-b969-515e6d9ac271-kube-api-access-ncqqn\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: 
\"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543836 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-config-data\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543867 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-combined-ca-bundle\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543900 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-config-data-custom\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.543928 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-config-data\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.545792 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60523f6c-8c7e-4591-b969-515e6d9ac271-logs\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.557210 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-config-data\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.560878 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-config-data-custom\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.576822 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60523f6c-8c7e-4591-b969-515e6d9ac271-combined-ca-bundle\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.582319 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9gtzw"] Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.583719 4959 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.601473 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqqn\" (UniqueName: \"kubernetes.io/projected/60523f6c-8c7e-4591-b969-515e6d9ac271-kube-api-access-ncqqn\") pod \"barbican-worker-84d64bc77f-q9rgh\" (UID: \"60523f6c-8c7e-4591-b969-515e6d9ac271\") " pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.634854 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9gtzw"] Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648448 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648511 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbfrn\" (UniqueName: \"kubernetes.io/projected/0cc944bb-6924-4282-bd07-5e221f7c7460-kube-api-access-nbfrn\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648544 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-config-data-custom\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648568 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8rk6\" (UniqueName: \"kubernetes.io/projected/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-kube-api-access-x8rk6\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648601 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648624 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-config-data\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648649 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-combined-ca-bundle\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " 
pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648698 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648714 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-config\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.648735 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc944bb-6924-4282-bd07-5e221f7c7460-logs\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.649159 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc944bb-6924-4282-bd07-5e221f7c7460-logs\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.659323 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-combined-ca-bundle\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.665716 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-config-data-custom\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.666847 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc944bb-6924-4282-bd07-5e221f7c7460-config-data\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.721780 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbfrn\" (UniqueName: \"kubernetes.io/projected/0cc944bb-6924-4282-bd07-5e221f7c7460-kube-api-access-nbfrn\") pod \"barbican-keystone-listener-d8765856b-nw6p9\" (UID: \"0cc944bb-6924-4282-bd07-5e221f7c7460\") " pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.754044 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8rk6\" (UniqueName: \"kubernetes.io/projected/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-kube-api-access-x8rk6\") 
pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.754158 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.754213 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.754231 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-config\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.754296 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.755413 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.755912 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-dns-svc\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.756568 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5798b7b654-h8krc"] Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.757811 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.761465 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.767929 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5798b7b654-h8krc"] Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.844876 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-config\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.845273 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.845489 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84d64bc77f-q9rgh" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.845271 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.858520 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-combined-ca-bundle\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.858577 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-logs\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.858603 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.858622 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data-custom\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.858737 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gkj\" (UniqueName: \"kubernetes.io/projected/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-kube-api-access-q9gkj\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " 
pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.861255 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8rk6\" (UniqueName: \"kubernetes.io/projected/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-kube-api-access-x8rk6\") pod \"dnsmasq-dns-7f46f79845-9gtzw\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.937107 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.961000 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9gkj\" (UniqueName: \"kubernetes.io/projected/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-kube-api-access-q9gkj\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.961058 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-combined-ca-bundle\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.961087 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-logs\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.961128 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.961146 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data-custom\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.967263 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data-custom\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.967432 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-logs\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.968287 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-combined-ca-bundle\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.969175 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:54 crc kubenswrapper[4959]: I0121 13:30:54.986907 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9gkj\" (UniqueName: \"kubernetes.io/projected/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-kube-api-access-q9gkj\") pod \"barbican-api-5798b7b654-h8krc\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:55 crc kubenswrapper[4959]: I0121 13:30:55.097408 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cb6d76584-4n6sf" event={"ID":"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3","Type":"ContainerStarted","Data":"85b49964ab188a40486f096c7cc1b8ca275e9ae56bcc3d7646e3db45799279c0"} Jan 21 13:30:55 crc kubenswrapper[4959]: I0121 13:30:55.097727 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cb6d76584-4n6sf" event={"ID":"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3","Type":"ContainerStarted","Data":"d071edeacbcdfcb1b529afb895fe968bc7e14ee4729e88043f98ba06b546243d"} Jan 21 13:30:55 crc kubenswrapper[4959]: I0121 13:30:55.151520 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:55 crc kubenswrapper[4959]: I0121 13:30:55.390174 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84d64bc77f-q9rgh"] Jan 21 13:30:55 crc kubenswrapper[4959]: I0121 13:30:55.395665 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d8765856b-nw6p9"] Jan 21 13:30:55 crc kubenswrapper[4959]: I0121 13:30:55.573074 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9gtzw"] Jan 21 13:30:55 crc kubenswrapper[4959]: W0121 13:30:55.579016 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod983fd75a_2236_4b6b_a4df_c6d9fd5524d2.slice/crio-e315e1d217f5c38f52766f4c7cc21ed5274969813521640ba70d74352ec90106 WatchSource:0}: Error finding container e315e1d217f5c38f52766f4c7cc21ed5274969813521640ba70d74352ec90106: Status 404 returned error can't find the container with id e315e1d217f5c38f52766f4c7cc21ed5274969813521640ba70d74352ec90106 Jan 21 13:30:55 crc kubenswrapper[4959]: I0121 13:30:55.726015 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5798b7b654-h8krc"] Jan 21 13:30:55 crc kubenswrapper[4959]: W0121 13:30:55.741192 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode263bff8_6bc4_4f49_9355_f37d8fd1e7fd.slice/crio-40ddb0c89ee5952d84102316a0ee99b7fa43432674d580f0af4ab23410934b76 WatchSource:0}: Error finding container 40ddb0c89ee5952d84102316a0ee99b7fa43432674d580f0af4ab23410934b76: Status 404 returned error can't find the container with id 40ddb0c89ee5952d84102316a0ee99b7fa43432674d580f0af4ab23410934b76 
Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.106482 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" event={"ID":"0cc944bb-6924-4282-bd07-5e221f7c7460","Type":"ContainerStarted","Data":"4a985634d2f0a55c5c297631c37b35c1a2134185c1cda30191f7c229ad733b3e"} Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.112367 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5798b7b654-h8krc" event={"ID":"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd","Type":"ContainerStarted","Data":"735a14a0b072b9c7f6f38fa1e81fc4a539fca335a9710983f43afad3358ed91f"} Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.112424 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5798b7b654-h8krc" event={"ID":"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd","Type":"ContainerStarted","Data":"40ddb0c89ee5952d84102316a0ee99b7fa43432674d580f0af4ab23410934b76"} Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.113646 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.113680 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.115662 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cb6d76584-4n6sf" event={"ID":"8d68c35e-a1e4-46f6-a8d3-29cc8206eab3","Type":"ContainerStarted","Data":"9ff3d83aaa611dcab42319d6cec6f56c8f8f7d98ee9c4f54aa42d0ca86d8b963"} Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.116374 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.116420 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.119196 4959 generic.go:334] "Generic (PLEG): container finished" podID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" containerID="c9cb53b0fbe324498b2b2a7589d58139f314c45c8e4573825fc01196680f3f76" exitCode=0 Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.119248 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" event={"ID":"983fd75a-2236-4b6b-a4df-c6d9fd5524d2","Type":"ContainerDied","Data":"c9cb53b0fbe324498b2b2a7589d58139f314c45c8e4573825fc01196680f3f76"} Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.119265 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" event={"ID":"983fd75a-2236-4b6b-a4df-c6d9fd5524d2","Type":"ContainerStarted","Data":"e315e1d217f5c38f52766f4c7cc21ed5274969813521640ba70d74352ec90106"} Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.121934 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d64bc77f-q9rgh" event={"ID":"60523f6c-8c7e-4591-b969-515e6d9ac271","Type":"ContainerStarted","Data":"c772ee00f6d376e4584edafb4e53365498970fe382dfc0c810c93310286f95ab"} Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.141148 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5798b7b654-h8krc" podStartSLOduration=2.141125276 podStartE2EDuration="2.141125276s" podCreationTimestamp="2026-01-21 13:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:30:56.133843795 +0000 UTC m=+1317.096874348" watchObservedRunningTime="2026-01-21 13:30:56.141125276 +0000 UTC m=+1317.104155819" Jan 21 13:30:56 crc kubenswrapper[4959]: I0121 13:30:56.173240 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5cb6d76584-4n6sf" podStartSLOduration=3.173217614 podStartE2EDuration="3.173217614s" podCreationTimestamp="2026-01-21 13:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:30:56.164682728 +0000 UTC m=+1317.127713271" watchObservedRunningTime="2026-01-21 13:30:56.173217614 +0000 UTC m=+1317.136248167" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.134786 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5798b7b654-h8krc" event={"ID":"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd","Type":"ContainerStarted","Data":"4c615788b31a2791dc69fed78d27145d1829d92d2d27305e0ef86ee05ac22051"} Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.138016 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" event={"ID":"983fd75a-2236-4b6b-a4df-c6d9fd5524d2","Type":"ContainerStarted","Data":"60c15deb93c814c35192477a53a0cb00b1f8ad4bed648d2f1b7f2c25fd57215c"} Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.171405 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" podStartSLOduration=3.171384209 podStartE2EDuration="3.171384209s" podCreationTimestamp="2026-01-21 13:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:30:57.160419425 +0000 UTC m=+1318.123449968" watchObservedRunningTime="2026-01-21 13:30:57.171384209 +0000 UTC m=+1318.134414742" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.446224 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-765d4c965b-4xv4p"] Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.452064 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.454180 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.454386 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.470532 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-765d4c965b-4xv4p"] Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.613441 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-public-tls-certs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.613563 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbbb9f49-5dce-421f-895a-8004dce6f9ad-logs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.613590 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-combined-ca-bundle\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.613666 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-config-data\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.613737 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-internal-tls-certs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.613759 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-config-data-custom\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.613774 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzr2l\" (UniqueName: \"kubernetes.io/projected/cbbb9f49-5dce-421f-895a-8004dce6f9ad-kube-api-access-lzr2l\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.715639 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-internal-tls-certs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.715694 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-config-data-custom\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.715721 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzr2l\" (UniqueName: \"kubernetes.io/projected/cbbb9f49-5dce-421f-895a-8004dce6f9ad-kube-api-access-lzr2l\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.715762 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-public-tls-certs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.715846 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbbb9f49-5dce-421f-895a-8004dce6f9ad-logs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.715905 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-combined-ca-bundle\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.716345 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbbb9f49-5dce-421f-895a-8004dce6f9ad-logs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.716752 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-config-data\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.719610 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-internal-tls-certs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.719673 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-public-tls-certs\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.719715 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-combined-ca-bundle\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.720869 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-config-data-custom\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.721697 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbb9f49-5dce-421f-895a-8004dce6f9ad-config-data\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.738238 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzr2l\" (UniqueName: \"kubernetes.io/projected/cbbb9f49-5dce-421f-895a-8004dce6f9ad-kube-api-access-lzr2l\") pod \"barbican-api-765d4c965b-4xv4p\" (UID: \"cbbb9f49-5dce-421f-895a-8004dce6f9ad\") " pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:57 crc kubenswrapper[4959]: I0121 13:30:57.778581 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.148212 4959 generic.go:334] "Generic (PLEG): container finished" podID="ac8172e9-2396-4f6b-a632-8e32400aea67" containerID="d4f2eee4ac0b3985c4aa47ece6bc98dc713f27db40b41de0b80d6acdf1cbdc3d" exitCode=0 Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.148914 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbs79" event={"ID":"ac8172e9-2396-4f6b-a632-8e32400aea67","Type":"ContainerDied","Data":"d4f2eee4ac0b3985c4aa47ece6bc98dc713f27db40b41de0b80d6acdf1cbdc3d"} Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.154684 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" event={"ID":"0cc944bb-6924-4282-bd07-5e221f7c7460","Type":"ContainerStarted","Data":"e73d582a9b12ba9c81ff67f39210c939913f081464ea2902b9426c7c03d3465a"} Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.154726 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" event={"ID":"0cc944bb-6924-4282-bd07-5e221f7c7460","Type":"ContainerStarted","Data":"f3b0b3bb78de4c0b01515f8260163257a8bc64e896972d5c0476aea526b415d4"} Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.157693 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d64bc77f-q9rgh" event={"ID":"60523f6c-8c7e-4591-b969-515e6d9ac271","Type":"ContainerStarted","Data":"3bae86ff9e27f14e3cffe2a6ece5163cfb826f2da9333e4f35ba080606bca02b"} Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.158603 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.158618 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d64bc77f-q9rgh" event={"ID":"60523f6c-8c7e-4591-b969-515e6d9ac271","Type":"ContainerStarted","Data":"c1b86ec3170fa8d03786f4481cd9aa8fd33db31be89a691db44b8a986e61dd3c"} Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.197994 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84d64bc77f-q9rgh" podStartSLOduration=2.396710026 podStartE2EDuration="4.19797563s" podCreationTimestamp="2026-01-21 13:30:54 +0000 UTC" firstStartedPulling="2026-01-21 13:30:55.445971112 +0000 UTC m=+1316.409001655" lastFinishedPulling="2026-01-21 13:30:57.247236716 +0000 UTC m=+1318.210267259" observedRunningTime="2026-01-21 13:30:58.188532549 +0000 UTC m=+1319.151563092" watchObservedRunningTime="2026-01-21 13:30:58.19797563 +0000 UTC m=+1319.161006173" Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.212349 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-d8765856b-nw6p9" podStartSLOduration=2.412642276 podStartE2EDuration="4.212334197s" podCreationTimestamp="2026-01-21 13:30:54 +0000 UTC" firstStartedPulling="2026-01-21 13:30:55.447625868 +0000 UTC m=+1316.410656411" lastFinishedPulling="2026-01-21 13:30:57.247317799 +0000 UTC m=+1318.210348332" observedRunningTime="2026-01-21 13:30:58.210506266 +0000 UTC m=+1319.173536819" watchObservedRunningTime="2026-01-21 13:30:58.212334197 +0000 UTC m=+1319.175364740" Jan 21 13:30:58 crc kubenswrapper[4959]: W0121 13:30:58.266479 4959 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbbb9f49_5dce_421f_895a_8004dce6f9ad.slice/crio-79e1741297f11e34d2ff0acff33e7c63ea056a55848dcbbe68e60b5c60c7c6c5 WatchSource:0}: Error finding container 79e1741297f11e34d2ff0acff33e7c63ea056a55848dcbbe68e60b5c60c7c6c5: Status 404 returned error can't find the container with id 79e1741297f11e34d2ff0acff33e7c63ea056a55848dcbbe68e60b5c60c7c6c5 Jan 21 13:30:58 crc kubenswrapper[4959]: I0121 13:30:58.269259 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-765d4c965b-4xv4p"] Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.166048 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765d4c965b-4xv4p" event={"ID":"cbbb9f49-5dce-421f-895a-8004dce6f9ad","Type":"ContainerStarted","Data":"b969cc4ae57ac4d11498b42352a4e25743ffeb3dcbcbb0bf953dd422c093199d"} Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.166398 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765d4c965b-4xv4p" event={"ID":"cbbb9f49-5dce-421f-895a-8004dce6f9ad","Type":"ContainerStarted","Data":"1f98766e37cb03313305042fca857deb7b2050f8950e4c5c0108e3b2657dad5a"} Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.167609 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.167639 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765d4c965b-4xv4p" event={"ID":"cbbb9f49-5dce-421f-895a-8004dce6f9ad","Type":"ContainerStarted","Data":"79e1741297f11e34d2ff0acff33e7c63ea056a55848dcbbe68e60b5c60c7c6c5"} Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.519458 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kbs79" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.539637 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-765d4c965b-4xv4p" podStartSLOduration=2.539615743 podStartE2EDuration="2.539615743s" podCreationTimestamp="2026-01-21 13:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:30:59.188456942 +0000 UTC m=+1320.151487495" watchObservedRunningTime="2026-01-21 13:30:59.539615743 +0000 UTC m=+1320.502646306" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.656537 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-db-sync-config-data\") pod \"ac8172e9-2396-4f6b-a632-8e32400aea67\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.656598 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ac8172e9-2396-4f6b-a632-8e32400aea67-etc-machine-id\") pod \"ac8172e9-2396-4f6b-a632-8e32400aea67\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.656810 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-config-data\") pod \"ac8172e9-2396-4f6b-a632-8e32400aea67\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.656840 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-combined-ca-bundle\") pod \"ac8172e9-2396-4f6b-a632-8e32400aea67\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.656882 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-scripts\") pod \"ac8172e9-2396-4f6b-a632-8e32400aea67\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.656911 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsm4b\" (UniqueName: \"kubernetes.io/projected/ac8172e9-2396-4f6b-a632-8e32400aea67-kube-api-access-bsm4b\") pod \"ac8172e9-2396-4f6b-a632-8e32400aea67\" (UID: \"ac8172e9-2396-4f6b-a632-8e32400aea67\") " Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.658222 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac8172e9-2396-4f6b-a632-8e32400aea67-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ac8172e9-2396-4f6b-a632-8e32400aea67" (UID: "ac8172e9-2396-4f6b-a632-8e32400aea67"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.664059 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ac8172e9-2396-4f6b-a632-8e32400aea67" (UID: "ac8172e9-2396-4f6b-a632-8e32400aea67"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.664190 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8172e9-2396-4f6b-a632-8e32400aea67-kube-api-access-bsm4b" (OuterVolumeSpecName: "kube-api-access-bsm4b") pod "ac8172e9-2396-4f6b-a632-8e32400aea67" (UID: "ac8172e9-2396-4f6b-a632-8e32400aea67"). InnerVolumeSpecName "kube-api-access-bsm4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.666276 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-scripts" (OuterVolumeSpecName: "scripts") pod "ac8172e9-2396-4f6b-a632-8e32400aea67" (UID: "ac8172e9-2396-4f6b-a632-8e32400aea67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.702871 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac8172e9-2396-4f6b-a632-8e32400aea67" (UID: "ac8172e9-2396-4f6b-a632-8e32400aea67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.722691 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-config-data" (OuterVolumeSpecName: "config-data") pod "ac8172e9-2396-4f6b-a632-8e32400aea67" (UID: "ac8172e9-2396-4f6b-a632-8e32400aea67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.759219 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.759261 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.759272 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.759281 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsm4b\" (UniqueName: \"kubernetes.io/projected/ac8172e9-2396-4f6b-a632-8e32400aea67-kube-api-access-bsm4b\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.759291 4959 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8172e9-2396-4f6b-a632-8e32400aea67-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:59 crc kubenswrapper[4959]: I0121 13:30:59.759299 4959 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ac8172e9-2396-4f6b-a632-8e32400aea67-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.174860 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kbs79" event={"ID":"ac8172e9-2396-4f6b-a632-8e32400aea67","Type":"ContainerDied","Data":"8da5f2219f32a338b9636b3ec58a867e5c8a570b36e441265abc7cafd841f630"} Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.174899 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da5f2219f32a338b9636b3ec58a867e5c8a570b36e441265abc7cafd841f630" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.176119 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kbs79" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.186290 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vl9c7" event={"ID":"431f411c-8ae5-42e7-b76a-4ca21314112a","Type":"ContainerDied","Data":"6752eaa93b9b2ea81f03375078c6d56e8911e09dc9579ec6488b009dbf950df8"} Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.186384 4959 generic.go:334] "Generic (PLEG): container finished" podID="431f411c-8ae5-42e7-b76a-4ca21314112a" containerID="6752eaa93b9b2ea81f03375078c6d56e8911e09dc9579ec6488b009dbf950df8" exitCode=0 Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.187461 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.478236 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:00 crc kubenswrapper[4959]: E0121 13:31:00.478959 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8172e9-2396-4f6b-a632-8e32400aea67" containerName="cinder-db-sync" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.478980 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8172e9-2396-4f6b-a632-8e32400aea67" containerName="cinder-db-sync" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.479267 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8172e9-2396-4f6b-a632-8e32400aea67" containerName="cinder-db-sync" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.482079 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.484853 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-npplb" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.485549 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.485992 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.486274 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.519143 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.658436 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9gtzw"] Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.658999 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" podUID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" containerName="dnsmasq-dns" containerID="cri-o://60c15deb93c814c35192477a53a0cb00b1f8ad4bed648d2f1b7f2c25fd57215c" gracePeriod=10 Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.672941 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fa33b1f-15fa-4938-b549-9ae362cd6918-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.672995 4959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.673011 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4mw\" (UniqueName: \"kubernetes.io/projected/8fa33b1f-15fa-4938-b549-9ae362cd6918-kube-api-access-4p4mw\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.673159 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-scripts\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.674656 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.674839 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.736742 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj"] Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.783127 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj"] Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.783252 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.783616 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.783762 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fa33b1f-15fa-4938-b549-9ae362cd6918-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.783878 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.783972 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4mw\" (UniqueName: \"kubernetes.io/projected/8fa33b1f-15fa-4938-b549-9ae362cd6918-kube-api-access-4p4mw\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.784078 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-scripts\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.784223 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fa33b1f-15fa-4938-b549-9ae362cd6918-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.784376 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.791681 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-scripts\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.816880 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.826035 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.837997 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.844621 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4mw\" (UniqueName: \"kubernetes.io/projected/8fa33b1f-15fa-4938-b549-9ae362cd6918-kube-api-access-4p4mw\") pod \"cinder-scheduler-0\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.872151 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.873514 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.884597 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.898913 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96q7n\" (UniqueName: \"kubernetes.io/projected/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-kube-api-access-96q7n\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.898975 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-config\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.898997 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899022 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9728ce3e-ab6e-43dc-8860-f14875ce3f71-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899110 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhgnt\" (UniqueName: \"kubernetes.io/projected/9728ce3e-ab6e-43dc-8860-f14875ce3f71-kube-api-access-qhgnt\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899140 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899204 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899238 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899277 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-scripts\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899331 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data-custom\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899351 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9728ce3e-ab6e-43dc-8860-f14875ce3f71-logs\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.899375 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:00 crc kubenswrapper[4959]: I0121 13:31:00.905030 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001170 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-scripts\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001238 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data-custom\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001256 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9728ce3e-ab6e-43dc-8860-f14875ce3f71-logs\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001276 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001305 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96q7n\" (UniqueName: \"kubernetes.io/projected/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-kube-api-access-96q7n\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001324 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-config\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001341 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001359 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9728ce3e-ab6e-43dc-8860-f14875ce3f71-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001474 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhgnt\" (UniqueName: \"kubernetes.io/projected/9728ce3e-ab6e-43dc-8860-f14875ce3f71-kube-api-access-qhgnt\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001508 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001545 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.001568 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 
crc kubenswrapper[4959]: I0121 13:31:01.006920 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9728ce3e-ab6e-43dc-8860-f14875ce3f71-logs\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.007439 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-config\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.007856 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.008140 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.008620 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9728ce3e-ab6e-43dc-8860-f14875ce3f71-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.009458 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.010053 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-dns-svc\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.010623 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-scripts\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.015264 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data-custom\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.035297 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 
crc kubenswrapper[4959]: I0121 13:31:01.040068 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96q7n\" (UniqueName: \"kubernetes.io/projected/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-kube-api-access-96q7n\") pod \"dnsmasq-dns-5f7f9f7cbf-8qhcj\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.041130 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhgnt\" (UniqueName: \"kubernetes.io/projected/9728ce3e-ab6e-43dc-8860-f14875ce3f71-kube-api-access-qhgnt\") pod \"cinder-api-0\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.138288 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.206357 4959 generic.go:334] "Generic (PLEG): container finished" podID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" containerID="60c15deb93c814c35192477a53a0cb00b1f8ad4bed648d2f1b7f2c25fd57215c" exitCode=0 Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.206592 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" event={"ID":"983fd75a-2236-4b6b-a4df-c6d9fd5524d2","Type":"ContainerDied","Data":"60c15deb93c814c35192477a53a0cb00b1f8ad4bed648d2f1b7f2c25fd57215c"} Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.216406 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.235798 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.263586 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.305735 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-nb\") pod \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.305794 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8rk6\" (UniqueName: \"kubernetes.io/projected/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-kube-api-access-x8rk6\") pod \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.305831 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-dns-svc\") pod \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.305882 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-sb\") pod \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.305929 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-config\") pod \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\" (UID: \"983fd75a-2236-4b6b-a4df-c6d9fd5524d2\") " Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.352507 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-kube-api-access-x8rk6" (OuterVolumeSpecName: "kube-api-access-x8rk6") pod "983fd75a-2236-4b6b-a4df-c6d9fd5524d2" (UID: "983fd75a-2236-4b6b-a4df-c6d9fd5524d2"). InnerVolumeSpecName "kube-api-access-x8rk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.412991 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8rk6\" (UniqueName: \"kubernetes.io/projected/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-kube-api-access-x8rk6\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.418250 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "983fd75a-2236-4b6b-a4df-c6d9fd5524d2" (UID: "983fd75a-2236-4b6b-a4df-c6d9fd5524d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.423455 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-config" (OuterVolumeSpecName: "config") pod "983fd75a-2236-4b6b-a4df-c6d9fd5524d2" (UID: "983fd75a-2236-4b6b-a4df-c6d9fd5524d2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.438177 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "983fd75a-2236-4b6b-a4df-c6d9fd5524d2" (UID: "983fd75a-2236-4b6b-a4df-c6d9fd5524d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.475595 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "983fd75a-2236-4b6b-a4df-c6d9fd5524d2" (UID: "983fd75a-2236-4b6b-a4df-c6d9fd5524d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.515353 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.515397 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.515407 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.515415 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983fd75a-2236-4b6b-a4df-c6d9fd5524d2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.668333 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.822138 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-combined-ca-bundle\") pod \"431f411c-8ae5-42e7-b76a-4ca21314112a\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.822337 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-config\") pod \"431f411c-8ae5-42e7-b76a-4ca21314112a\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.822372 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78l65\" (UniqueName: \"kubernetes.io/projected/431f411c-8ae5-42e7-b76a-4ca21314112a-kube-api-access-78l65\") pod \"431f411c-8ae5-42e7-b76a-4ca21314112a\" (UID: \"431f411c-8ae5-42e7-b76a-4ca21314112a\") " Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.839375 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431f411c-8ae5-42e7-b76a-4ca21314112a-kube-api-access-78l65" (OuterVolumeSpecName: "kube-api-access-78l65") pod "431f411c-8ae5-42e7-b76a-4ca21314112a" (UID: "431f411c-8ae5-42e7-b76a-4ca21314112a"). InnerVolumeSpecName "kube-api-access-78l65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.868265 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "431f411c-8ae5-42e7-b76a-4ca21314112a" (UID: "431f411c-8ae5-42e7-b76a-4ca21314112a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.879151 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:01 crc kubenswrapper[4959]: W0121 13:31:01.882714 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fa33b1f_15fa_4938_b549_9ae362cd6918.slice/crio-e442b3d17a0aebb6a51bf733a2fbd9b923b8c1ac5bc4d1d576baa490b1cc597e WatchSource:0}: Error finding container e442b3d17a0aebb6a51bf733a2fbd9b923b8c1ac5bc4d1d576baa490b1cc597e: Status 404 returned error can't find the container with id e442b3d17a0aebb6a51bf733a2fbd9b923b8c1ac5bc4d1d576baa490b1cc597e Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.893716 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-config" (OuterVolumeSpecName: "config") pod "431f411c-8ae5-42e7-b76a-4ca21314112a" (UID: "431f411c-8ae5-42e7-b76a-4ca21314112a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.929328 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.929372 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78l65\" (UniqueName: \"kubernetes.io/projected/431f411c-8ae5-42e7-b76a-4ca21314112a-kube-api-access-78l65\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:01 crc kubenswrapper[4959]: I0121 13:31:01.929383 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431f411c-8ae5-42e7-b76a-4ca21314112a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:02 crc kubenswrapper[4959]: W0121 13:31:02.001636 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a7c44d_5824_4fbc_8f45_e421a2bc1b8b.slice/crio-2ec99dfabd5199a317122b9fb1934ca5240b9c9499e4f3f0dd3261b694cd42b9 WatchSource:0}: Error finding container 2ec99dfabd5199a317122b9fb1934ca5240b9c9499e4f3f0dd3261b694cd42b9: Status 404 returned error can't find the container with id 2ec99dfabd5199a317122b9fb1934ca5240b9c9499e4f3f0dd3261b694cd42b9 Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.001783 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj"] Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.121027 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:02 crc kubenswrapper[4959]: W0121 13:31:02.129553 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9728ce3e_ab6e_43dc_8860_f14875ce3f71.slice/crio-42ac888e9d7ca8b69029b82c85f24da9cc615fe0fb5b4156251a074703cea808 WatchSource:0}: Error finding container 42ac888e9d7ca8b69029b82c85f24da9cc615fe0fb5b4156251a074703cea808: Status 404 returned error can't find the container with id 42ac888e9d7ca8b69029b82c85f24da9cc615fe0fb5b4156251a074703cea808 Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.230926 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fa33b1f-15fa-4938-b549-9ae362cd6918","Type":"ContainerStarted","Data":"e442b3d17a0aebb6a51bf733a2fbd9b923b8c1ac5bc4d1d576baa490b1cc597e"} Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.235895 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9728ce3e-ab6e-43dc-8860-f14875ce3f71","Type":"ContainerStarted","Data":"42ac888e9d7ca8b69029b82c85f24da9cc615fe0fb5b4156251a074703cea808"} Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.241356 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vl9c7" event={"ID":"431f411c-8ae5-42e7-b76a-4ca21314112a","Type":"ContainerDied","Data":"6eca0ab4c27088f3c38fe7f361931b10251801e2b04d6054f89a520bf0f9338e"} Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.241405 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eca0ab4c27088f3c38fe7f361931b10251801e2b04d6054f89a520bf0f9338e" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.241486 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vl9c7" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.247940 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" event={"ID":"983fd75a-2236-4b6b-a4df-c6d9fd5524d2","Type":"ContainerDied","Data":"e315e1d217f5c38f52766f4c7cc21ed5274969813521640ba70d74352ec90106"} Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.247996 4959 scope.go:117] "RemoveContainer" containerID="60c15deb93c814c35192477a53a0cb00b1f8ad4bed648d2f1b7f2c25fd57215c" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.248417 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-9gtzw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.250827 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" event={"ID":"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b","Type":"ContainerStarted","Data":"2ec99dfabd5199a317122b9fb1934ca5240b9c9499e4f3f0dd3261b694cd42b9"} Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.345552 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9gtzw"] Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.354707 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-9gtzw"] Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.359755 4959 scope.go:117] "RemoveContainer" containerID="c9cb53b0fbe324498b2b2a7589d58139f314c45c8e4573825fc01196680f3f76" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.438888 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj"] Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.498553 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-p22bw"] Jan 21 13:31:02 crc kubenswrapper[4959]: E0121 13:31:02.498951 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431f411c-8ae5-42e7-b76a-4ca21314112a" containerName="neutron-db-sync" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.498974 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="431f411c-8ae5-42e7-b76a-4ca21314112a" containerName="neutron-db-sync" Jan 21 13:31:02 crc kubenswrapper[4959]: E0121 13:31:02.499017 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" containerName="init" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.499027 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" containerName="init" Jan 21 13:31:02 crc kubenswrapper[4959]: E0121 13:31:02.499043 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" containerName="dnsmasq-dns" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.499053 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" containerName="dnsmasq-dns" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.499278 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" containerName="dnsmasq-dns" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.499319 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="431f411c-8ae5-42e7-b76a-4ca21314112a" containerName="neutron-db-sync" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.506229 4959 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.537174 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-p22bw"] Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.554076 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p2sf\" (UniqueName: \"kubernetes.io/projected/00cdbda8-c419-4768-ac68-598950ed9387-kube-api-access-6p2sf\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.554151 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-dns-svc\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.554253 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.554283 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-config\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.554316 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.656029 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-dns-svc\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.656192 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.656237 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-config\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.656280 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.656329 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p2sf\" (UniqueName: \"kubernetes.io/projected/00cdbda8-c419-4768-ac68-598950ed9387-kube-api-access-6p2sf\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.656505 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.657314 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-dns-svc\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.657459 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.657635 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-config\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.658195 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.683084 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p2sf\" (UniqueName: \"kubernetes.io/projected/00cdbda8-c419-4768-ac68-598950ed9387-kube-api-access-6p2sf\") pod \"dnsmasq-dns-58db5546cc-p22bw\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.720162 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d8c7687b4-bsf2j"] Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.721793 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.727140 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.727832 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.728466 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lc5wb" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.729251 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.737578 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d8c7687b4-bsf2j"] Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.759062 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-ovndb-tls-certs\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.759159 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-httpd-config\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.759222 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-combined-ca-bundle\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.759330 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjsd5\" (UniqueName: \"kubernetes.io/projected/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-kube-api-access-sjsd5\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.759388 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-config\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.826641 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.861518 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-ovndb-tls-certs\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.861585 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-httpd-config\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.861670 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-combined-ca-bundle\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.861766 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjsd5\" (UniqueName: \"kubernetes.io/projected/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-kube-api-access-sjsd5\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.861824 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-config\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.899967 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-ovndb-tls-certs\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.903180 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-combined-ca-bundle\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.904215 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-config\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.904759 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-httpd-config\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.905518 4959 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sjsd5\" (UniqueName: \"kubernetes.io/projected/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-kube-api-access-sjsd5\") pod \"neutron-5d8c7687b4-bsf2j\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:02 crc kubenswrapper[4959]: I0121 13:31:02.989560 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:31:03 crc kubenswrapper[4959]: I0121 13:31:03.076847 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:03 crc kubenswrapper[4959]: I0121 13:31:03.273662 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9728ce3e-ab6e-43dc-8860-f14875ce3f71","Type":"ContainerStarted","Data":"35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f"} Jan 21 13:31:03 crc kubenswrapper[4959]: I0121 13:31:03.276883 4959 generic.go:334] "Generic (PLEG): container finished" podID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" containerID="109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3" exitCode=0 Jan 21 13:31:03 crc kubenswrapper[4959]: I0121 13:31:03.276952 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" event={"ID":"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b","Type":"ContainerDied","Data":"109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3"} Jan 21 13:31:03 crc kubenswrapper[4959]: I0121 13:31:03.355149 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983fd75a-2236-4b6b-a4df-c6d9fd5524d2" path="/var/lib/kubelet/pods/983fd75a-2236-4b6b-a4df-c6d9fd5524d2/volumes" Jan 21 13:31:03 crc kubenswrapper[4959]: I0121 13:31:03.546653 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-p22bw"] Jan 21 13:31:03 crc kubenswrapper[4959]: W0121 13:31:03.556635 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00cdbda8_c419_4768_ac68_598950ed9387.slice/crio-710cdd360c2d5ec5dc75f75c1924f9bd5b848239cfa04976d668f864755e2a6f WatchSource:0}: Error finding container 710cdd360c2d5ec5dc75f75c1924f9bd5b848239cfa04976d668f864755e2a6f: Status 404 returned error can't find the container with id 710cdd360c2d5ec5dc75f75c1924f9bd5b848239cfa04976d668f864755e2a6f Jan 21 13:31:03 crc kubenswrapper[4959]: I0121 13:31:03.798440 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d8c7687b4-bsf2j"] Jan 21 13:31:03 crc kubenswrapper[4959]: W0121 13:31:03.827158 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44ffb7d9_1580_4d6c_bceb_5fa26a5f6d54.slice/crio-080d0dd96d035b216bcdea78627c947493ccc5d6459b921a79fb85491bc7bf9d WatchSource:0}: Error finding container 080d0dd96d035b216bcdea78627c947493ccc5d6459b921a79fb85491bc7bf9d: Status 404 returned error can't find the container with id 080d0dd96d035b216bcdea78627c947493ccc5d6459b921a79fb85491bc7bf9d Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.355200 4959 generic.go:334] "Generic (PLEG): container finished" podID="00cdbda8-c419-4768-ac68-598950ed9387" containerID="e1bcedb026f22690f05b7582e9f5509fab6521b601ce73b5ff5ac06cfff4101a" exitCode=0 Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.355498 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-58db5546cc-p22bw" event={"ID":"00cdbda8-c419-4768-ac68-598950ed9387","Type":"ContainerDied","Data":"e1bcedb026f22690f05b7582e9f5509fab6521b601ce73b5ff5ac06cfff4101a"} Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.355572 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" event={"ID":"00cdbda8-c419-4768-ac68-598950ed9387","Type":"ContainerStarted","Data":"710cdd360c2d5ec5dc75f75c1924f9bd5b848239cfa04976d668f864755e2a6f"} Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.372397 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8c7687b4-bsf2j" event={"ID":"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54","Type":"ContainerStarted","Data":"f3dc89d0c9f62f630cae2a945c9dd4602ffadce1b4d490879f863549fc28ad34"} Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.372450 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8c7687b4-bsf2j" event={"ID":"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54","Type":"ContainerStarted","Data":"080d0dd96d035b216bcdea78627c947493ccc5d6459b921a79fb85491bc7bf9d"} Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.384464 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" event={"ID":"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b","Type":"ContainerStarted","Data":"ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b"} Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.384673 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" podUID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" containerName="dnsmasq-dns" containerID="cri-o://ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b" gracePeriod=10 Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.385012 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.527641 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9728ce3e-ab6e-43dc-8860-f14875ce3f71","Type":"ContainerStarted","Data":"a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef"} Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.527827 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerName="cinder-api-log" containerID="cri-o://35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f" gracePeriod=30 Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.527942 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.527961 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerName="cinder-api" containerID="cri-o://a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef" gracePeriod=30 Jan 21 13:31:04 crc kubenswrapper[4959]: I0121 13:31:04.553475 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" podStartSLOduration=4.553459053 podStartE2EDuration="4.553459053s" podCreationTimestamp="2026-01-21 13:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 13:31:04.550901432 +0000 UTC m=+1325.513931965" watchObservedRunningTime="2026-01-21 13:31:04.553459053 +0000 UTC m=+1325.516489596" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.124078 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.147265 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.147238254 podStartE2EDuration="5.147238254s" podCreationTimestamp="2026-01-21 13:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:31:04.581399595 +0000 UTC m=+1325.544430138" watchObservedRunningTime="2026-01-21 13:31:05.147238254 +0000 UTC m=+1326.110268797" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.254429 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-config\") pod \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.365394 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-sb\") pod \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.365455 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-nb\") pod \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.365506 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96q7n\" (UniqueName: \"kubernetes.io/projected/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-kube-api-access-96q7n\") pod \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.365617 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-dns-svc\") pod \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\" (UID: \"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b\") " Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.367162 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-config" (OuterVolumeSpecName: "config") pod "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" (UID: "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.394804 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-kube-api-access-96q7n" (OuterVolumeSpecName: "kube-api-access-96q7n") pod "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" (UID: "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b"). InnerVolumeSpecName "kube-api-access-96q7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.468246 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.468286 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96q7n\" (UniqueName: \"kubernetes.io/projected/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-kube-api-access-96q7n\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.475685 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" (UID: "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.481688 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" (UID: "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.500546 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" (UID: "01a7c44d-5824-4fbc-8f45-e421a2bc1b8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.538926 4959 generic.go:334] "Generic (PLEG): container finished" podID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerID="35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f" exitCode=143 Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.538996 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9728ce3e-ab6e-43dc-8860-f14875ce3f71","Type":"ContainerDied","Data":"35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f"} Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.549979 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8c7687b4-bsf2j" event={"ID":"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54","Type":"ContainerStarted","Data":"cd03d59c2a16a039669d3604509298a6e18d85b9c654dd979b8c3bf08234277e"} Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.550306 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.555539 4959 generic.go:334] "Generic (PLEG): container finished" podID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" containerID="ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b" exitCode=0 Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.555742 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.555888 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" event={"ID":"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b","Type":"ContainerDied","Data":"ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b"} Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.556260 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj" event={"ID":"01a7c44d-5824-4fbc-8f45-e421a2bc1b8b","Type":"ContainerDied","Data":"2ec99dfabd5199a317122b9fb1934ca5240b9c9499e4f3f0dd3261b694cd42b9"} Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.556371 4959 scope.go:117] "RemoveContainer" containerID="ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.558425 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fa33b1f-15fa-4938-b549-9ae362cd6918","Type":"ContainerStarted","Data":"6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5"} Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.570974 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.571006 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.571019 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.575198 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d8c7687b4-bsf2j" podStartSLOduration=3.575175619 podStartE2EDuration="3.575175619s" podCreationTimestamp="2026-01-21 13:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:31:05.572580357 +0000 UTC m=+1326.535610900" watchObservedRunningTime="2026-01-21 13:31:05.575175619 +0000 UTC m=+1326.538206162" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.607690 4959 scope.go:117] "RemoveContainer" containerID="109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.616408 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj"] Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.644245 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f7f9f7cbf-8qhcj"] Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.664839 4959 scope.go:117] "RemoveContainer" containerID="ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b" Jan 21 13:31:05 crc kubenswrapper[4959]: E0121 13:31:05.671952 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b\": container with ID starting with 
ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b not found: ID does not exist" containerID="ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.671998 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b"} err="failed to get container status \"ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b\": rpc error: code = NotFound desc = could not find container \"ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b\": container with ID starting with ccb4560477c26b70ef3bb469d7f715cc896a388819086bb45c29fba32deb0e1b not found: ID does not exist" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.672029 4959 scope.go:117] "RemoveContainer" containerID="109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3" Jan 21 13:31:05 crc kubenswrapper[4959]: E0121 13:31:05.684328 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3\": container with ID starting with 109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3 not found: ID does not exist" containerID="109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.684381 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3"} err="failed to get container status \"109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3\": rpc error: code = NotFound desc = could not find container \"109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3\": container with ID starting with 109749e85ba4c31d7938a13dddedf269663c2118648d0c30c42bbe4c9f0469d3 not found: ID does not exist" Jan 21 13:31:05 crc kubenswrapper[4959]: I0121 13:31:05.720879 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:31:06 crc kubenswrapper[4959]: I0121 13:31:06.584607 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fa33b1f-15fa-4938-b549-9ae362cd6918","Type":"ContainerStarted","Data":"990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c"} Jan 21 13:31:06 crc kubenswrapper[4959]: I0121 13:31:06.591150 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" event={"ID":"00cdbda8-c419-4768-ac68-598950ed9387","Type":"ContainerStarted","Data":"9cec4098898f9ea337b9a4b23290bd85a8da4a4f8ede77a700bbda1843a7b6cb"} Jan 21 13:31:06 crc kubenswrapper[4959]: I0121 13:31:06.698418 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.119424115 podStartE2EDuration="6.698399402s" podCreationTimestamp="2026-01-21 13:31:00 +0000 UTC" firstStartedPulling="2026-01-21 13:31:01.887603948 +0000 UTC m=+1322.850634491" lastFinishedPulling="2026-01-21 13:31:03.466579235 +0000 UTC m=+1324.429609778" observedRunningTime="2026-01-21 13:31:06.68892713 +0000 UTC m=+1327.651957673" watchObservedRunningTime="2026-01-21 13:31:06.698399402 +0000 UTC m=+1327.661429945" Jan 21 13:31:06 crc kubenswrapper[4959]: I0121 13:31:06.710581 4959 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" podStartSLOduration=4.710560448 podStartE2EDuration="4.710560448s" podCreationTimestamp="2026-01-21 13:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:31:06.710520487 +0000 UTC m=+1327.673551030" watchObservedRunningTime="2026-01-21 13:31:06.710560448 +0000 UTC m=+1327.673590981" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.014624 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77cdb9766f-rtq4k"] Jan 21 13:31:07 crc kubenswrapper[4959]: E0121 13:31:07.015085 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" containerName="init" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.015107 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" containerName="init" Jan 21 13:31:07 crc kubenswrapper[4959]: E0121 13:31:07.015122 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" containerName="dnsmasq-dns" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.015149 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" containerName="dnsmasq-dns" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.015344 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" containerName="dnsmasq-dns" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.016475 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.019619 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.021008 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.050920 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77cdb9766f-rtq4k"] Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.081069 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-internal-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.081184 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-combined-ca-bundle\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.081261 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-config\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.081305 4959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvz9t\" (UniqueName: \"kubernetes.io/projected/3913238c-8062-4839-9106-ce99f45ccadf-kube-api-access-cvz9t\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.081328 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-httpd-config\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.081348 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-public-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.081406 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-ovndb-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.182796 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-internal-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.182902 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-combined-ca-bundle\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.182951 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-config\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.182994 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvz9t\" (UniqueName: \"kubernetes.io/projected/3913238c-8062-4839-9106-ce99f45ccadf-kube-api-access-cvz9t\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.183049 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-httpd-config\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.183076 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-public-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.183230 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-ovndb-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.191255 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-internal-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.192781 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-ovndb-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.193193 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-config\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.193876 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-public-tls-certs\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.200355 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-httpd-config\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.210889 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3913238c-8062-4839-9106-ce99f45ccadf-combined-ca-bundle\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.223098 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvz9t\" (UniqueName: \"kubernetes.io/projected/3913238c-8062-4839-9106-ce99f45ccadf-kube-api-access-cvz9t\") pod \"neutron-77cdb9766f-rtq4k\" (UID: \"3913238c-8062-4839-9106-ce99f45ccadf\") " pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.296265 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a7c44d-5824-4fbc-8f45-e421a2bc1b8b" path="/var/lib/kubelet/pods/01a7c44d-5824-4fbc-8f45-e421a2bc1b8b/volumes" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.396034 4959 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:07 crc kubenswrapper[4959]: I0121 13:31:07.605875 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:08 crc kubenswrapper[4959]: I0121 13:31:08.347830 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77cdb9766f-rtq4k"] Jan 21 13:31:08 crc kubenswrapper[4959]: W0121 13:31:08.352469 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3913238c_8062_4839_9106_ce99f45ccadf.slice/crio-e5136772f2a62c521ff962b4531092a2bc462c9a7d4a1e493e4e2bfb6cda9162 WatchSource:0}: Error finding container e5136772f2a62c521ff962b4531092a2bc462c9a7d4a1e493e4e2bfb6cda9162: Status 404 returned error can't find the container with id e5136772f2a62c521ff962b4531092a2bc462c9a7d4a1e493e4e2bfb6cda9162 Jan 21 13:31:08 crc kubenswrapper[4959]: I0121 13:31:08.617233 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cdb9766f-rtq4k" event={"ID":"3913238c-8062-4839-9106-ce99f45ccadf","Type":"ContainerStarted","Data":"43d2d398b0e3bc8f72317bb0545512f9be2b7c9afeee64ebf1cc6d4d0ce1a5ef"} Jan 21 13:31:08 crc kubenswrapper[4959]: I0121 13:31:08.617534 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cdb9766f-rtq4k" event={"ID":"3913238c-8062-4839-9106-ce99f45ccadf","Type":"ContainerStarted","Data":"e5136772f2a62c521ff962b4531092a2bc462c9a7d4a1e493e4e2bfb6cda9162"} Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.526787 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6cf9fc5bf6-zcb4m" Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.629672 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cdb9766f-rtq4k" event={"ID":"3913238c-8062-4839-9106-ce99f45ccadf","Type":"ContainerStarted","Data":"84c6df9a92af6a6185a2025ee9545ce26463e4eb2d6d64516f4a3a712e777243"} Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.629919 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.660293 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77cdb9766f-rtq4k" podStartSLOduration=3.660277965 podStartE2EDuration="3.660277965s" podCreationTimestamp="2026-01-21 13:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:31:09.659633407 +0000 UTC m=+1330.622663950" watchObservedRunningTime="2026-01-21 13:31:09.660277965 +0000 UTC m=+1330.623308508" Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.793909 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.855915 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-765d4c965b-4xv4p" Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.938179 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5798b7b654-h8krc"] Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.938457 4959 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-5798b7b654-h8krc" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api-log" containerID="cri-o://735a14a0b072b9c7f6f38fa1e81fc4a539fca335a9710983f43afad3358ed91f" gracePeriod=30 Jan 21 13:31:09 crc kubenswrapper[4959]: I0121 13:31:09.938618 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5798b7b654-h8krc" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api" containerID="cri-o://4c615788b31a2791dc69fed78d27145d1829d92d2d27305e0ef86ee05ac22051" gracePeriod=30 Jan 21 13:31:10 crc kubenswrapper[4959]: I0121 13:31:10.640079 4959 generic.go:334] "Generic (PLEG): container finished" podID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerID="735a14a0b072b9c7f6f38fa1e81fc4a539fca335a9710983f43afad3358ed91f" exitCode=143 Jan 21 13:31:10 crc kubenswrapper[4959]: I0121 13:31:10.640208 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5798b7b654-h8krc" event={"ID":"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd","Type":"ContainerDied","Data":"735a14a0b072b9c7f6f38fa1e81fc4a539fca335a9710983f43afad3358ed91f"} Jan 21 13:31:11 crc kubenswrapper[4959]: I0121 13:31:11.139358 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 13:31:11 crc kubenswrapper[4959]: I0121 13:31:11.469012 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 13:31:11 crc kubenswrapper[4959]: I0121 13:31:11.689755 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.139478 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.140967 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.143305 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.143593 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.147800 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qrcj8" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.157004 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.224270 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blx6h\" (UniqueName: \"kubernetes.io/projected/778a7738-b71c-4f16-a695-4b6155aad41a-kube-api-access-blx6h\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.224616 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/778a7738-b71c-4f16-a695-4b6155aad41a-openstack-config-secret\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.224742 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/778a7738-b71c-4f16-a695-4b6155aad41a-openstack-config\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.224967 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778a7738-b71c-4f16-a695-4b6155aad41a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.326343 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blx6h\" (UniqueName: \"kubernetes.io/projected/778a7738-b71c-4f16-a695-4b6155aad41a-kube-api-access-blx6h\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.326694 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/778a7738-b71c-4f16-a695-4b6155aad41a-openstack-config-secret\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.326735 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/778a7738-b71c-4f16-a695-4b6155aad41a-openstack-config\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.326846 4959 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778a7738-b71c-4f16-a695-4b6155aad41a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.328292 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/778a7738-b71c-4f16-a695-4b6155aad41a-openstack-config\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.335036 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778a7738-b71c-4f16-a695-4b6155aad41a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.340709 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/778a7738-b71c-4f16-a695-4b6155aad41a-openstack-config-secret\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.344058 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blx6h\" (UniqueName: \"kubernetes.io/projected/778a7738-b71c-4f16-a695-4b6155aad41a-kube-api-access-blx6h\") pod \"openstackclient\" (UID: \"778a7738-b71c-4f16-a695-4b6155aad41a\") " pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.459766 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.655271 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerName="cinder-scheduler" containerID="cri-o://6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5" gracePeriod=30 Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.656003 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerName="probe" containerID="cri-o://990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c" gracePeriod=30 Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.828286 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.882090 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5"] Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.882341 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" podUID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" containerName="dnsmasq-dns" containerID="cri-o://c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8" gracePeriod=10 Jan 21 13:31:12 crc kubenswrapper[4959]: I0121 13:31:12.986072 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 13:31:12 crc kubenswrapper[4959]: W0121 13:31:12.996312 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod778a7738_b71c_4f16_a695_4b6155aad41a.slice/crio-870d663f29cb446f1d118a1e6dca128b58055c7657cc49aa206153e250c6e877 WatchSource:0}: Error finding container 870d663f29cb446f1d118a1e6dca128b58055c7657cc49aa206153e250c6e877: Status 404 returned error can't find the container with id 870d663f29cb446f1d118a1e6dca128b58055c7657cc49aa206153e250c6e877 Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.002300 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.173227 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5798b7b654-h8krc" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": read tcp 10.217.0.2:40122->10.217.0.145:9311: read: connection reset by peer" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.173304 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5798b7b654-h8krc" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": read tcp 10.217.0.2:40126->10.217.0.145:9311: read: connection reset by peer" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.538129 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.655393 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlr4p\" (UniqueName: \"kubernetes.io/projected/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-kube-api-access-vlr4p\") pod \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.655498 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-nb\") pod \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.655606 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-config\") pod \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.655645 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-sb\") pod \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.655666 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-dns-svc\") pod \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\" (UID: \"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad\") " Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.688953 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-kube-api-access-vlr4p" (OuterVolumeSpecName: "kube-api-access-vlr4p") pod "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" (UID: "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad"). InnerVolumeSpecName "kube-api-access-vlr4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.741317 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" (UID: "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.758513 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlr4p\" (UniqueName: \"kubernetes.io/projected/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-kube-api-access-vlr4p\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.758539 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.765478 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" (UID: "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.767020 4959 generic.go:334] "Generic (PLEG): container finished" podID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerID="4c615788b31a2791dc69fed78d27145d1829d92d2d27305e0ef86ee05ac22051" exitCode=0 Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.767080 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5798b7b654-h8krc" event={"ID":"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd","Type":"ContainerDied","Data":"4c615788b31a2791dc69fed78d27145d1829d92d2d27305e0ef86ee05ac22051"} Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.773935 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" (UID: "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.780444 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-config" (OuterVolumeSpecName: "config") pod "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" (UID: "7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.786826 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"778a7738-b71c-4f16-a695-4b6155aad41a","Type":"ContainerStarted","Data":"870d663f29cb446f1d118a1e6dca128b58055c7657cc49aa206153e250c6e877"} Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.800742 4959 generic.go:334] "Generic (PLEG): container finished" podID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" containerID="c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8" exitCode=0 Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.800973 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" event={"ID":"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad","Type":"ContainerDied","Data":"c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8"} Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.801243 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" event={"ID":"7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad","Type":"ContainerDied","Data":"522e254b45797834f0bc604611b3e76a9a78e41c0ef728e436d3b116e6100d2f"} Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.801034 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.801318 4959 scope.go:117] "RemoveContainer" containerID="c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.808040 4959 generic.go:334] "Generic (PLEG): container finished" podID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerID="6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5" exitCode=0 Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.808077 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fa33b1f-15fa-4938-b549-9ae362cd6918","Type":"ContainerDied","Data":"6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5"} Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.851598 4959 scope.go:117] "RemoveContainer" containerID="9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.854814 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5"] Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.859765 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.859927 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.860011 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.861665 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.862967 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-rdbt5"] Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.882373 4959 scope.go:117] "RemoveContainer" containerID="c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8" Jan 21 13:31:13 crc kubenswrapper[4959]: E0121 13:31:13.883583 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8\": container with ID starting with c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8 not found: ID does not exist" containerID="c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.883706 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8"} err="failed to get container status \"c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8\": rpc error: code = NotFound desc = could not find container \"c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8\": container with ID starting with c12f39239da6fde3026d0a57ffca88a9e147e78573ce8f52729d71b3f68b56c8 not found: ID does not exist" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.883802 4959 scope.go:117] "RemoveContainer" containerID="9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf" Jan 21 13:31:13 crc kubenswrapper[4959]: E0121 13:31:13.884297 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf\": container with ID starting with 9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf not found: ID does not exist" containerID="9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf" Jan 21 13:31:13 crc kubenswrapper[4959]: I0121 13:31:13.884649 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf"} err="failed to get container status \"9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf\": rpc error: code = NotFound desc = could not find container \"9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf\": container with ID starting with 9f217888113a9aecbd669188317f0ef74562e13b6313a17d1999ee581cfa8abf not found: ID does not exist" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.064271 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-combined-ca-bundle\") pod \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.064327 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data-custom\") pod \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.064410 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-logs\") pod \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.064457 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9gkj\" (UniqueName: \"kubernetes.io/projected/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-kube-api-access-q9gkj\") pod \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.064561 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data\") pod \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\" (UID: \"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.065609 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.065912 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-logs" (OuterVolumeSpecName: "logs") pod "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" (UID: "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.069381 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" (UID: "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.072383 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-kube-api-access-q9gkj" (OuterVolumeSpecName: "kube-api-access-q9gkj") pod "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" (UID: "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd"). InnerVolumeSpecName "kube-api-access-q9gkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.118843 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" (UID: "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.161330 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data" (OuterVolumeSpecName: "config-data") pod "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" (UID: "e263bff8-6bc4-4f49-9355-f37d8fd1e7fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.168137 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.168170 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.168184 4959 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.168194 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.168201 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9gkj\" (UniqueName: \"kubernetes.io/projected/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd-kube-api-access-q9gkj\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.433636 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.478598 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data-custom\") pod \"8fa33b1f-15fa-4938-b549-9ae362cd6918\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.478642 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-scripts\") pod \"8fa33b1f-15fa-4938-b549-9ae362cd6918\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.478701 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data\") pod \"8fa33b1f-15fa-4938-b549-9ae362cd6918\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.478766 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fa33b1f-15fa-4938-b549-9ae362cd6918-etc-machine-id\") pod \"8fa33b1f-15fa-4938-b549-9ae362cd6918\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.478806 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p4mw\" (UniqueName: \"kubernetes.io/projected/8fa33b1f-15fa-4938-b549-9ae362cd6918-kube-api-access-4p4mw\") pod \"8fa33b1f-15fa-4938-b549-9ae362cd6918\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.478823 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-combined-ca-bundle\") pod \"8fa33b1f-15fa-4938-b549-9ae362cd6918\" (UID: \"8fa33b1f-15fa-4938-b549-9ae362cd6918\") " Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.479313 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fa33b1f-15fa-4938-b549-9ae362cd6918-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8fa33b1f-15fa-4938-b549-9ae362cd6918" (UID: "8fa33b1f-15fa-4938-b549-9ae362cd6918"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.498638 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-scripts" (OuterVolumeSpecName: "scripts") pod "8fa33b1f-15fa-4938-b549-9ae362cd6918" (UID: "8fa33b1f-15fa-4938-b549-9ae362cd6918"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.499823 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8fa33b1f-15fa-4938-b549-9ae362cd6918" (UID: "8fa33b1f-15fa-4938-b549-9ae362cd6918"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.500105 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa33b1f-15fa-4938-b549-9ae362cd6918-kube-api-access-4p4mw" (OuterVolumeSpecName: "kube-api-access-4p4mw") pod "8fa33b1f-15fa-4938-b549-9ae362cd6918" (UID: "8fa33b1f-15fa-4938-b549-9ae362cd6918"). InnerVolumeSpecName "kube-api-access-4p4mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.566360 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fa33b1f-15fa-4938-b549-9ae362cd6918" (UID: "8fa33b1f-15fa-4938-b549-9ae362cd6918"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.581575 4959 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fa33b1f-15fa-4938-b549-9ae362cd6918-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.581614 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p4mw\" (UniqueName: \"kubernetes.io/projected/8fa33b1f-15fa-4938-b549-9ae362cd6918-kube-api-access-4p4mw\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.581627 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.581638 4959 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.581649 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.614191 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data" (OuterVolumeSpecName: "config-data") pod "8fa33b1f-15fa-4938-b549-9ae362cd6918" (UID: "8fa33b1f-15fa-4938-b549-9ae362cd6918"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.693079 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa33b1f-15fa-4938-b549-9ae362cd6918-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.823124 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5798b7b654-h8krc" event={"ID":"e263bff8-6bc4-4f49-9355-f37d8fd1e7fd","Type":"ContainerDied","Data":"40ddb0c89ee5952d84102316a0ee99b7fa43432674d580f0af4ab23410934b76"} Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.823168 4959 scope.go:117] "RemoveContainer" containerID="4c615788b31a2791dc69fed78d27145d1829d92d2d27305e0ef86ee05ac22051" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.823257 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5798b7b654-h8krc" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.834525 4959 generic.go:334] "Generic (PLEG): container finished" podID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerID="990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c" exitCode=0 Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.834585 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fa33b1f-15fa-4938-b549-9ae362cd6918","Type":"ContainerDied","Data":"990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c"} Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.834627 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8fa33b1f-15fa-4938-b549-9ae362cd6918","Type":"ContainerDied","Data":"e442b3d17a0aebb6a51bf733a2fbd9b923b8c1ac5bc4d1d576baa490b1cc597e"} Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.834709 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.867258 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5798b7b654-h8krc"] Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.869835 4959 scope.go:117] "RemoveContainer" containerID="735a14a0b072b9c7f6f38fa1e81fc4a539fca335a9710983f43afad3358ed91f" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.882623 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5798b7b654-h8krc"] Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.892607 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.902211 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914161 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:14 crc kubenswrapper[4959]: E0121 13:31:14.914558 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" containerName="init" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914576 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" containerName="init" Jan 21 13:31:14 crc kubenswrapper[4959]: E0121 13:31:14.914590 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" containerName="dnsmasq-dns" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914597 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" containerName="dnsmasq-dns" Jan 21 13:31:14 crc kubenswrapper[4959]: E0121 13:31:14.914621 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914627 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api" Jan 21 13:31:14 crc kubenswrapper[4959]: E0121 13:31:14.914640 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerName="probe" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914646 4959 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerName="probe" Jan 21 13:31:14 crc kubenswrapper[4959]: E0121 13:31:14.914659 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerName="cinder-scheduler" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914665 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerName="cinder-scheduler" Jan 21 13:31:14 crc kubenswrapper[4959]: E0121 13:31:14.914678 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api-log" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914685 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api-log" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914833 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerName="cinder-scheduler" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914847 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api-log" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914856 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" containerName="dnsmasq-dns" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914863 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" containerName="barbican-api" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.914873 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" containerName="probe" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.915760 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.917963 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.918843 4959 scope.go:117] "RemoveContainer" containerID="990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.928404 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.964131 4959 scope.go:117] "RemoveContainer" containerID="6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.998087 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4abf280-9cc9-46a4-9948-67fdc4e551ab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.998146 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-scripts\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.998207 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-config-data\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.998244 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44zdw\" (UniqueName: \"kubernetes.io/projected/b4abf280-9cc9-46a4-9948-67fdc4e551ab-kube-api-access-44zdw\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.998505 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:14 crc kubenswrapper[4959]: I0121 13:31:14.998559 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.002630 4959 scope.go:117] "RemoveContainer" containerID="990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c" Jan 21 13:31:15 crc kubenswrapper[4959]: E0121 13:31:15.003673 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c\": container with ID starting with 
990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c not found: ID does not exist" containerID="990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.003713 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c"} err="failed to get container status \"990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c\": rpc error: code = NotFound desc = could not find container \"990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c\": container with ID starting with 990498553a622c21692a119471c611e2aa77c656c0a31f04830d5e6614553b5c not found: ID does not exist" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.003752 4959 scope.go:117] "RemoveContainer" containerID="6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5" Jan 21 13:31:15 crc kubenswrapper[4959]: E0121 13:31:15.004266 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5\": container with ID starting with 6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5 not found: ID does not exist" containerID="6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.004297 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5"} err="failed to get container status \"6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5\": rpc error: code = NotFound desc = could not find container \"6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5\": container with ID starting with 6547f2532338f48a6889a50daa362a10e93c5cbbb94a221c7b7af3fe5919d4a5 not found: ID does not exist" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.100417 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.100486 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4abf280-9cc9-46a4-9948-67fdc4e551ab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.100513 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-scripts\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.100587 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-config-data\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.100639 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-44zdw\" (UniqueName: \"kubernetes.io/projected/b4abf280-9cc9-46a4-9948-67fdc4e551ab-kube-api-access-44zdw\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.100739 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.100956 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4abf280-9cc9-46a4-9948-67fdc4e551ab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.105644 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.105914 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-config-data\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.106121 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.113086 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4abf280-9cc9-46a4-9948-67fdc4e551ab-scripts\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.121440 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44zdw\" (UniqueName: \"kubernetes.io/projected/b4abf280-9cc9-46a4-9948-67fdc4e551ab-kube-api-access-44zdw\") pod \"cinder-scheduler-0\" (UID: \"b4abf280-9cc9-46a4-9948-67fdc4e551ab\") " pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.246578 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.304552 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad" path="/var/lib/kubelet/pods/7ea0146b-858a-48b3-98a6-dd4ecd7fa8ad/volumes" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.305769 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa33b1f-15fa-4938-b549-9ae362cd6918" path="/var/lib/kubelet/pods/8fa33b1f-15fa-4938-b549-9ae362cd6918/volumes" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.306859 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e263bff8-6bc4-4f49-9355-f37d8fd1e7fd" path="/var/lib/kubelet/pods/e263bff8-6bc4-4f49-9355-f37d8fd1e7fd/volumes" Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.784704 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.847368 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b4abf280-9cc9-46a4-9948-67fdc4e551ab","Type":"ContainerStarted","Data":"9585328f5e11c34629ae601e925f8979a2230410a01314bf1c5d5568eeb97391"} Jan 21 13:31:15 crc kubenswrapper[4959]: I0121 13:31:15.861818 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 13:31:16 crc kubenswrapper[4959]: I0121 13:31:16.865939 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b4abf280-9cc9-46a4-9948-67fdc4e551ab","Type":"ContainerStarted","Data":"9588130fe9d4ef3a1bf538bcb27a37a118de89e8078b1513c756679ddec2a8c2"} Jan 21 13:31:17 crc kubenswrapper[4959]: I0121 13:31:17.879654 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b4abf280-9cc9-46a4-9948-67fdc4e551ab","Type":"ContainerStarted","Data":"917081c5906187bbb87b2f98459a762221e4ac8866bf41cf94232cd888faa20f"} Jan 21 13:31:17 crc kubenswrapper[4959]: I0121 13:31:17.906371 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.9063559530000003 podStartE2EDuration="3.906355953s" podCreationTimestamp="2026-01-21 13:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:31:17.904181703 +0000 UTC m=+1338.867212246" watchObservedRunningTime="2026-01-21 13:31:17.906355953 +0000 UTC m=+1338.869386496" Jan 21 13:31:20 crc kubenswrapper[4959]: I0121 13:31:20.247439 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 13:31:20 crc kubenswrapper[4959]: I0121 13:31:20.910326 4959 generic.go:334] "Generic (PLEG): container finished" podID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerID="9cc743a644d5af825c0234a52783e853d29a62846daa40376bbb917d33b60a1f" exitCode=137 Jan 21 13:31:20 crc kubenswrapper[4959]: I0121 13:31:20.910373 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a255ee80-1aff-4b3e-a129-5fb11a2edb1b","Type":"ContainerDied","Data":"9cc743a644d5af825c0234a52783e853d29a62846daa40376bbb917d33b60a1f"} Jan 21 13:31:21 crc kubenswrapper[4959]: I0121 13:31:21.379584 4959 patch_prober.go:28] 
interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:31:21 crc kubenswrapper[4959]: I0121 13:31:21.379646 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:31:21 crc kubenswrapper[4959]: I0121 13:31:21.379688 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:31:21 crc kubenswrapper[4959]: I0121 13:31:21.380542 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a84446f54fbcdd9e945dd5ad114c0f8a1adc39825215bb3644cea7a3988b06e"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:31:21 crc kubenswrapper[4959]: I0121 13:31:21.380603 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://8a84446f54fbcdd9e945dd5ad114c0f8a1adc39825215bb3644cea7a3988b06e" gracePeriod=600 Jan 21 13:31:21 crc kubenswrapper[4959]: I0121 13:31:21.926425 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="8a84446f54fbcdd9e945dd5ad114c0f8a1adc39825215bb3644cea7a3988b06e" exitCode=0 Jan 21 13:31:21 crc kubenswrapper[4959]: I0121 13:31:21.926468 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"8a84446f54fbcdd9e945dd5ad114c0f8a1adc39825215bb3644cea7a3988b06e"} Jan 21 13:31:21 crc kubenswrapper[4959]: I0121 13:31:21.927002 4959 scope.go:117] "RemoveContainer" containerID="d241f95bdad8e099eb04c705c02b5632d266875f065692682f2eadc1b6776be6" Jan 21 13:31:25 crc kubenswrapper[4959]: I0121 13:31:25.535450 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 13:31:25 crc kubenswrapper[4959]: I0121 13:31:25.988802 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:31:26 crc kubenswrapper[4959]: I0121 13:31:26.016388 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5cb6d76584-4n6sf" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.003933 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a"} Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.015380 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.137364 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-scripts\") pod \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.137468 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-sg-core-conf-yaml\") pod \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.137494 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-config-data\") pod \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.137520 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-run-httpd\") pod \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.137554 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-combined-ca-bundle\") pod \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.137630 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-log-httpd\") pod \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.137665 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q5p9\" (UniqueName: \"kubernetes.io/projected/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-kube-api-access-2q5p9\") pod \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\" (UID: \"a255ee80-1aff-4b3e-a129-5fb11a2edb1b\") " Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.138176 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a255ee80-1aff-4b3e-a129-5fb11a2edb1b" (UID: "a255ee80-1aff-4b3e-a129-5fb11a2edb1b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.138189 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a255ee80-1aff-4b3e-a129-5fb11a2edb1b" (UID: "a255ee80-1aff-4b3e-a129-5fb11a2edb1b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.138712 4959 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.142903 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-kube-api-access-2q5p9" (OuterVolumeSpecName: "kube-api-access-2q5p9") pod "a255ee80-1aff-4b3e-a129-5fb11a2edb1b" (UID: "a255ee80-1aff-4b3e-a129-5fb11a2edb1b"). InnerVolumeSpecName "kube-api-access-2q5p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.143956 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-scripts" (OuterVolumeSpecName: "scripts") pod "a255ee80-1aff-4b3e-a129-5fb11a2edb1b" (UID: "a255ee80-1aff-4b3e-a129-5fb11a2edb1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.167084 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a255ee80-1aff-4b3e-a129-5fb11a2edb1b" (UID: "a255ee80-1aff-4b3e-a129-5fb11a2edb1b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.195279 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a255ee80-1aff-4b3e-a129-5fb11a2edb1b" (UID: "a255ee80-1aff-4b3e-a129-5fb11a2edb1b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.240994 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.241304 4959 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.241391 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q5p9\" (UniqueName: \"kubernetes.io/projected/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-kube-api-access-2q5p9\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.241470 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.241544 4959 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.253426 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-config-data" (OuterVolumeSpecName: "config-data") pod "a255ee80-1aff-4b3e-a129-5fb11a2edb1b" (UID: "a255ee80-1aff-4b3e-a129-5fb11a2edb1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:27 crc kubenswrapper[4959]: I0121 13:31:27.344274 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a255ee80-1aff-4b3e-a129-5fb11a2edb1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.047574 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a255ee80-1aff-4b3e-a129-5fb11a2edb1b","Type":"ContainerDied","Data":"89ed8a9e55eb7c7ba2cf9713a36a0404bb77e8cf8c2679ccffeada8111f7bc86"} Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.048118 4959 scope.go:117] "RemoveContainer" containerID="9cc743a644d5af825c0234a52783e853d29a62846daa40376bbb917d33b60a1f" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.047845 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.052302 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"778a7738-b71c-4f16-a695-4b6155aad41a","Type":"ContainerStarted","Data":"8f388edb3fad0a0d0522a0d7e28798abc4326703a2f2a607b4eb7635f1c65776"} Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.083720 4959 scope.go:117] "RemoveContainer" containerID="366378b663f14f9c840bf2894f5baea947412252098c7d4766037beb252dbb10" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.098277 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.109047 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.115666 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.346100063 podStartE2EDuration="16.115646945s" podCreationTimestamp="2026-01-21 13:31:12 +0000 UTC" firstStartedPulling="2026-01-21 13:31:13.001643462 +0000 UTC m=+1333.964674005" lastFinishedPulling="2026-01-21 13:31:26.771190334 +0000 UTC m=+1347.734220887" observedRunningTime="2026-01-21 13:31:28.114796891 +0000 UTC m=+1349.077827454" watchObservedRunningTime="2026-01-21 13:31:28.115646945 +0000 UTC m=+1349.078677498" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.118472 4959 scope.go:117] "RemoveContainer" containerID="af2cc15ba9e953f809168b2e7b6230721e4fe72b1eb8e272e8e2ef444e1bd5c0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.137450 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:28 crc kubenswrapper[4959]: E0121 13:31:28.137805 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="proxy-httpd" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.137816 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="proxy-httpd" Jan 21 13:31:28 crc kubenswrapper[4959]: E0121 13:31:28.137829 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="sg-core" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.137836 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="sg-core" Jan 21 13:31:28 crc kubenswrapper[4959]: E0121 13:31:28.137855 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="ceilometer-notification-agent" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.137860 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="ceilometer-notification-agent" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.138002 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="ceilometer-notification-agent" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.138014 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" containerName="sg-core" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.138028 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" 
containerName="proxy-httpd" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.139454 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.141857 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.142052 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.142328 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.271950 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-scripts\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.272001 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.272031 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.272050 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-config-data\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.272067 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-run-httpd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.272082 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n44hd\" (UniqueName: \"kubernetes.io/projected/03001848-e9d0-42e5-8b95-2d2fd6d948c7-kube-api-access-n44hd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.272149 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-log-httpd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.326217 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:28 crc kubenswrapper[4959]: E0121 13:31:28.327148 4959 pod_workers.go:1301] "Error syncing pod, skipping" 
err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-n44hd log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="03001848-e9d0-42e5-8b95-2d2fd6d948c7" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.373957 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-scripts\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.374006 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.374033 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.374063 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-config-data\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.374088 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-run-httpd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.374118 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n44hd\" (UniqueName: \"kubernetes.io/projected/03001848-e9d0-42e5-8b95-2d2fd6d948c7-kube-api-access-n44hd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.374174 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-log-httpd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.374626 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-log-httpd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.377179 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-run-httpd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.381778 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.382727 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-scripts\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.398304 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.398896 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n44hd\" (UniqueName: \"kubernetes.io/projected/03001848-e9d0-42e5-8b95-2d2fd6d948c7-kube-api-access-n44hd\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:28 crc kubenswrapper[4959]: I0121 13:31:28.401024 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-config-data\") pod \"ceilometer-0\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " pod="openstack/ceilometer-0" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.060663 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.077417 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.186415 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-config-data\") pod \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.186464 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-scripts\") pod \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.186541 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-combined-ca-bundle\") pod \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.186896 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-run-httpd\") pod \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.186973 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-log-httpd\") pod \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.187009 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-sg-core-conf-yaml\") pod \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.187392 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n44hd\" (UniqueName: \"kubernetes.io/projected/03001848-e9d0-42e5-8b95-2d2fd6d948c7-kube-api-access-n44hd\") pod \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\" (UID: \"03001848-e9d0-42e5-8b95-2d2fd6d948c7\") " Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.189934 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "03001848-e9d0-42e5-8b95-2d2fd6d948c7" (UID: "03001848-e9d0-42e5-8b95-2d2fd6d948c7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.190184 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "03001848-e9d0-42e5-8b95-2d2fd6d948c7" (UID: "03001848-e9d0-42e5-8b95-2d2fd6d948c7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.193736 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-scripts" (OuterVolumeSpecName: "scripts") pod "03001848-e9d0-42e5-8b95-2d2fd6d948c7" (UID: "03001848-e9d0-42e5-8b95-2d2fd6d948c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.194498 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03001848-e9d0-42e5-8b95-2d2fd6d948c7-kube-api-access-n44hd" (OuterVolumeSpecName: "kube-api-access-n44hd") pod "03001848-e9d0-42e5-8b95-2d2fd6d948c7" (UID: "03001848-e9d0-42e5-8b95-2d2fd6d948c7"). InnerVolumeSpecName "kube-api-access-n44hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.194625 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03001848-e9d0-42e5-8b95-2d2fd6d948c7" (UID: "03001848-e9d0-42e5-8b95-2d2fd6d948c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.195682 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-config-data" (OuterVolumeSpecName: "config-data") pod "03001848-e9d0-42e5-8b95-2d2fd6d948c7" (UID: "03001848-e9d0-42e5-8b95-2d2fd6d948c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.205237 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "03001848-e9d0-42e5-8b95-2d2fd6d948c7" (UID: "03001848-e9d0-42e5-8b95-2d2fd6d948c7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.288964 4959 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.288998 4959 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03001848-e9d0-42e5-8b95-2d2fd6d948c7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.289007 4959 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.289017 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n44hd\" (UniqueName: \"kubernetes.io/projected/03001848-e9d0-42e5-8b95-2d2fd6d948c7-kube-api-access-n44hd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.289025 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.289032 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.289040 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03001848-e9d0-42e5-8b95-2d2fd6d948c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:29 crc kubenswrapper[4959]: I0121 13:31:29.296693 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a255ee80-1aff-4b3e-a129-5fb11a2edb1b" path="/var/lib/kubelet/pods/a255ee80-1aff-4b3e-a129-5fb11a2edb1b/volumes" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.068600 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.124595 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.139802 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.168012 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.184973 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.186330 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.188371 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.188569 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.321243 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.321311 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-config-data\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.321335 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glk92\" (UniqueName: \"kubernetes.io/projected/690084a6-caff-452d-a954-c15ae02e4630-kube-api-access-glk92\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.321366 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-scripts\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.321382 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.321433 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-run-httpd\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.321472 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-log-httpd\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.422830 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-run-httpd\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.422912 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-log-httpd\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.422981 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.423023 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-config-data\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.423054 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glk92\" (UniqueName: \"kubernetes.io/projected/690084a6-caff-452d-a954-c15ae02e4630-kube-api-access-glk92\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.423120 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-scripts\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.423144 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.423477 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-run-httpd\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.424497 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-log-httpd\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.428491 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.429211 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-scripts\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.429225 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-config-data\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.430081 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.440765 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glk92\" (UniqueName: \"kubernetes.io/projected/690084a6-caff-452d-a954-c15ae02e4630-kube-api-access-glk92\") pod \"ceilometer-0\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.504645 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:30 crc kubenswrapper[4959]: I0121 13:31:30.868067 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:30 crc kubenswrapper[4959]: W0121 13:31:30.890504 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod690084a6_caff_452d_a954_c15ae02e4630.slice/crio-4c1df1ee1c9405df8ff17f2226377f0cfe38946e27d53a087d5f121eb8306bba WatchSource:0}: Error finding container 4c1df1ee1c9405df8ff17f2226377f0cfe38946e27d53a087d5f121eb8306bba: Status 404 returned error can't find the container with id 4c1df1ee1c9405df8ff17f2226377f0cfe38946e27d53a087d5f121eb8306bba Jan 21 13:31:31 crc kubenswrapper[4959]: I0121 13:31:31.076441 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerStarted","Data":"4c1df1ee1c9405df8ff17f2226377f0cfe38946e27d53a087d5f121eb8306bba"} Jan 21 13:31:31 crc kubenswrapper[4959]: I0121 13:31:31.297234 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03001848-e9d0-42e5-8b95-2d2fd6d948c7" path="/var/lib/kubelet/pods/03001848-e9d0-42e5-8b95-2d2fd6d948c7/volumes" Jan 21 13:31:33 crc kubenswrapper[4959]: I0121 13:31:33.092484 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerStarted","Data":"60105a19603295e31b24d0136a87215c08c3bd1621617d3cec8a25ee8ba7acf7"} Jan 21 13:31:33 crc kubenswrapper[4959]: I0121 13:31:33.095539 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:34 crc kubenswrapper[4959]: I0121 13:31:34.104238 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerStarted","Data":"bf11820cc7ed98403d34cfab2d8151f0b4f3a8abe3dd0e4855b0195d682b1cbf"} Jan 21 13:31:34 crc kubenswrapper[4959]: I0121 13:31:34.105122 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerStarted","Data":"3c4f62d5ba8035d2c72068327a54191e4d48991b853baa4603e6fa13db3be135"} Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.105681 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.113056 4959 generic.go:334] "Generic (PLEG): container finished" podID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerID="a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef" exitCode=137 Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.113182 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.113186 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9728ce3e-ab6e-43dc-8860-f14875ce3f71","Type":"ContainerDied","Data":"a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef"} Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.113319 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9728ce3e-ab6e-43dc-8860-f14875ce3f71","Type":"ContainerDied","Data":"42ac888e9d7ca8b69029b82c85f24da9cc615fe0fb5b4156251a074703cea808"} Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.113337 4959 scope.go:117] "RemoveContainer" containerID="a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.135747 4959 scope.go:117] "RemoveContainer" containerID="35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.164804 4959 scope.go:117] "RemoveContainer" containerID="a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef" Jan 21 13:31:35 crc kubenswrapper[4959]: E0121 13:31:35.165511 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef\": container with ID starting with a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef not found: ID does not exist" containerID="a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.165568 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef"} err="failed to get container status \"a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef\": rpc error: code = NotFound desc = could not find container \"a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef\": container with ID starting with a2ce93efd16b15d784a4cbed93b8f9722c48f7d8fc6aebce3d9cecfdd79136ef not found: ID does not exist" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.165601 4959 scope.go:117] "RemoveContainer" containerID="35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f" Jan 21 13:31:35 crc kubenswrapper[4959]: E0121 13:31:35.165991 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f\": container with ID starting with 35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f not found: ID does not exist" containerID="35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.166018 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f"} 
err="failed to get container status \"35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f\": rpc error: code = NotFound desc = could not find container \"35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f\": container with ID starting with 35fdfaa5c37ed0cc6d41301ec85613629d3722876c08dda37e30d965244bd75f not found: ID does not exist" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.200995 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9728ce3e-ab6e-43dc-8860-f14875ce3f71-etc-machine-id\") pod \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.201148 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data-custom\") pod \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.201224 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-combined-ca-bundle\") pod \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.201219 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9728ce3e-ab6e-43dc-8860-f14875ce3f71-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9728ce3e-ab6e-43dc-8860-f14875ce3f71" (UID: "9728ce3e-ab6e-43dc-8860-f14875ce3f71"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.201250 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data\") pod \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.201316 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhgnt\" (UniqueName: \"kubernetes.io/projected/9728ce3e-ab6e-43dc-8860-f14875ce3f71-kube-api-access-qhgnt\") pod \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.201450 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9728ce3e-ab6e-43dc-8860-f14875ce3f71-logs\") pod \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.201499 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-scripts\") pod \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\" (UID: \"9728ce3e-ab6e-43dc-8860-f14875ce3f71\") " Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.202022 4959 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9728ce3e-ab6e-43dc-8860-f14875ce3f71-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.213875 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9728ce3e-ab6e-43dc-8860-f14875ce3f71-logs" (OuterVolumeSpecName: "logs") pod "9728ce3e-ab6e-43dc-8860-f14875ce3f71" (UID: "9728ce3e-ab6e-43dc-8860-f14875ce3f71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.220032 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9728ce3e-ab6e-43dc-8860-f14875ce3f71" (UID: "9728ce3e-ab6e-43dc-8860-f14875ce3f71"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.227753 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9728ce3e-ab6e-43dc-8860-f14875ce3f71-kube-api-access-qhgnt" (OuterVolumeSpecName: "kube-api-access-qhgnt") pod "9728ce3e-ab6e-43dc-8860-f14875ce3f71" (UID: "9728ce3e-ab6e-43dc-8860-f14875ce3f71"). InnerVolumeSpecName "kube-api-access-qhgnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.229606 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-scripts" (OuterVolumeSpecName: "scripts") pod "9728ce3e-ab6e-43dc-8860-f14875ce3f71" (UID: "9728ce3e-ab6e-43dc-8860-f14875ce3f71"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.243925 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9728ce3e-ab6e-43dc-8860-f14875ce3f71" (UID: "9728ce3e-ab6e-43dc-8860-f14875ce3f71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.291506 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data" (OuterVolumeSpecName: "config-data") pod "9728ce3e-ab6e-43dc-8860-f14875ce3f71" (UID: "9728ce3e-ab6e-43dc-8860-f14875ce3f71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.303172 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9728ce3e-ab6e-43dc-8860-f14875ce3f71-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.303202 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.303212 4959 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.303222 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.303232 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9728ce3e-ab6e-43dc-8860-f14875ce3f71-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.303240 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhgnt\" (UniqueName: \"kubernetes.io/projected/9728ce3e-ab6e-43dc-8860-f14875ce3f71-kube-api-access-qhgnt\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.440727 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.451755 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.469025 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:35 crc kubenswrapper[4959]: E0121 13:31:35.469405 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerName="cinder-api-log" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.469424 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerName="cinder-api-log" Jan 21 13:31:35 crc kubenswrapper[4959]: E0121 13:31:35.469442 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerName="cinder-api" Jan 
21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.469450 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerName="cinder-api" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.469605 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerName="cinder-api" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.469620 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" containerName="cinder-api-log" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.470483 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.482925 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.483256 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.483452 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.493494 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609342 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e6780a-4202-47ad-a81d-3b4e23e96da4-logs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609402 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncpsk\" (UniqueName: \"kubernetes.io/projected/a1e6780a-4202-47ad-a81d-3b4e23e96da4-kube-api-access-ncpsk\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609433 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609458 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609483 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609509 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/a1e6780a-4202-47ad-a81d-3b4e23e96da4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609580 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-scripts\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609617 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.609694 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-config-data\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.710837 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e6780a-4202-47ad-a81d-3b4e23e96da4-logs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.710882 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncpsk\" (UniqueName: \"kubernetes.io/projected/a1e6780a-4202-47ad-a81d-3b4e23e96da4-kube-api-access-ncpsk\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.710903 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.710921 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.710935 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.710956 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1e6780a-4202-47ad-a81d-3b4e23e96da4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.710975 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-scripts\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.710997 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.711046 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-config-data\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.711456 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1e6780a-4202-47ad-a81d-3b4e23e96da4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.711827 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e6780a-4202-47ad-a81d-3b4e23e96da4-logs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.715253 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.715683 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.716570 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.717634 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-scripts\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.717973 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.730138 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a1e6780a-4202-47ad-a81d-3b4e23e96da4-config-data\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.734596 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncpsk\" (UniqueName: \"kubernetes.io/projected/a1e6780a-4202-47ad-a81d-3b4e23e96da4-kube-api-access-ncpsk\") pod \"cinder-api-0\" (UID: \"a1e6780a-4202-47ad-a81d-3b4e23e96da4\") " pod="openstack/cinder-api-0" Jan 21 13:31:35 crc kubenswrapper[4959]: I0121 13:31:35.805487 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:31:36 crc kubenswrapper[4959]: I0121 13:31:36.124499 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerStarted","Data":"6f56825cfd5a733c2fa805de3f3816e4baf818a6746dbd9e871fe4fb98238fc7"} Jan 21 13:31:36 crc kubenswrapper[4959]: I0121 13:31:36.124908 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:31:36 crc kubenswrapper[4959]: I0121 13:31:36.150519 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.833419672 podStartE2EDuration="6.150497092s" podCreationTimestamp="2026-01-21 13:31:30 +0000 UTC" firstStartedPulling="2026-01-21 13:31:30.892960383 +0000 UTC m=+1351.855990926" lastFinishedPulling="2026-01-21 13:31:35.210037813 +0000 UTC m=+1356.173068346" observedRunningTime="2026-01-21 13:31:36.142061589 +0000 UTC m=+1357.105092142" watchObservedRunningTime="2026-01-21 13:31:36.150497092 +0000 UTC m=+1357.113527635" Jan 21 13:31:36 crc kubenswrapper[4959]: I0121 13:31:36.264286 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:31:36 crc kubenswrapper[4959]: W0121 13:31:36.266470 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1e6780a_4202_47ad_a81d_3b4e23e96da4.slice/crio-91f5197e85f2a60fcde03754fd53ea837e49e5b00b552b1a95daefaa26d6094c WatchSource:0}: Error finding container 91f5197e85f2a60fcde03754fd53ea837e49e5b00b552b1a95daefaa26d6094c: Status 404 returned error can't find the container with id 91f5197e85f2a60fcde03754fd53ea837e49e5b00b552b1a95daefaa26d6094c Jan 21 13:31:36 crc kubenswrapper[4959]: I0121 13:31:36.895033 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:37 crc kubenswrapper[4959]: I0121 13:31:37.141593 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1e6780a-4202-47ad-a81d-3b4e23e96da4","Type":"ContainerStarted","Data":"aab3b6bd830bd1508ea1d42012490e05110c40f5a8eb6d6166faf49df6b5a29d"} Jan 21 13:31:37 crc kubenswrapper[4959]: I0121 13:31:37.141871 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1e6780a-4202-47ad-a81d-3b4e23e96da4","Type":"ContainerStarted","Data":"91f5197e85f2a60fcde03754fd53ea837e49e5b00b552b1a95daefaa26d6094c"} Jan 21 13:31:37 crc kubenswrapper[4959]: I0121 13:31:37.299025 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9728ce3e-ab6e-43dc-8860-f14875ce3f71" path="/var/lib/kubelet/pods/9728ce3e-ab6e-43dc-8860-f14875ce3f71/volumes" Jan 21 13:31:37 crc kubenswrapper[4959]: I0121 13:31:37.415337 4959 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77cdb9766f-rtq4k" Jan 21 13:31:37 crc kubenswrapper[4959]: I0121 13:31:37.542960 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d8c7687b4-bsf2j"] Jan 21 13:31:37 crc kubenswrapper[4959]: I0121 13:31:37.543302 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d8c7687b4-bsf2j" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerName="neutron-api" containerID="cri-o://f3dc89d0c9f62f630cae2a945c9dd4602ffadce1b4d490879f863549fc28ad34" gracePeriod=30 Jan 21 13:31:37 crc kubenswrapper[4959]: I0121 13:31:37.543808 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d8c7687b4-bsf2j" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerName="neutron-httpd" containerID="cri-o://cd03d59c2a16a039669d3604509298a6e18d85b9c654dd979b8c3bf08234277e" gracePeriod=30 Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.152321 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1e6780a-4202-47ad-a81d-3b4e23e96da4","Type":"ContainerStarted","Data":"daf7f3dd6cdc820498b65ee009e333faf12d74d2ac04f7541b53a85b8ca3c705"} Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.152988 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.155548 4959 generic.go:334] "Generic (PLEG): container finished" podID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerID="cd03d59c2a16a039669d3604509298a6e18d85b9c654dd979b8c3bf08234277e" exitCode=0 Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.155629 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8c7687b4-bsf2j" event={"ID":"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54","Type":"ContainerDied","Data":"cd03d59c2a16a039669d3604509298a6e18d85b9c654dd979b8c3bf08234277e"} Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.155785 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="ceilometer-central-agent" containerID="cri-o://60105a19603295e31b24d0136a87215c08c3bd1621617d3cec8a25ee8ba7acf7" gracePeriod=30 Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.155802 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="proxy-httpd" containerID="cri-o://6f56825cfd5a733c2fa805de3f3816e4baf818a6746dbd9e871fe4fb98238fc7" gracePeriod=30 Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.155844 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="sg-core" containerID="cri-o://bf11820cc7ed98403d34cfab2d8151f0b4f3a8abe3dd0e4855b0195d682b1cbf" gracePeriod=30 Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.155871 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="ceilometer-notification-agent" containerID="cri-o://3c4f62d5ba8035d2c72068327a54191e4d48991b853baa4603e6fa13db3be135" gracePeriod=30 Jan 21 13:31:38 crc kubenswrapper[4959]: I0121 13:31:38.189790 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=3.18977032 podStartE2EDuration="3.18977032s" podCreationTimestamp="2026-01-21 13:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:31:38.184202346 +0000 UTC m=+1359.147232889" watchObservedRunningTime="2026-01-21 13:31:38.18977032 +0000 UTC m=+1359.152800873" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.170406 4959 generic.go:334] "Generic (PLEG): container finished" podID="690084a6-caff-452d-a954-c15ae02e4630" containerID="6f56825cfd5a733c2fa805de3f3816e4baf818a6746dbd9e871fe4fb98238fc7" exitCode=0 Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.170828 4959 generic.go:334] "Generic (PLEG): container finished" podID="690084a6-caff-452d-a954-c15ae02e4630" containerID="bf11820cc7ed98403d34cfab2d8151f0b4f3a8abe3dd0e4855b0195d682b1cbf" exitCode=2 Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.170839 4959 generic.go:334] "Generic (PLEG): container finished" podID="690084a6-caff-452d-a954-c15ae02e4630" containerID="3c4f62d5ba8035d2c72068327a54191e4d48991b853baa4603e6fa13db3be135" exitCode=0 Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.170846 4959 generic.go:334] "Generic (PLEG): container finished" podID="690084a6-caff-452d-a954-c15ae02e4630" containerID="60105a19603295e31b24d0136a87215c08c3bd1621617d3cec8a25ee8ba7acf7" exitCode=0 Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.170659 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerDied","Data":"6f56825cfd5a733c2fa805de3f3816e4baf818a6746dbd9e871fe4fb98238fc7"} Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.171150 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerDied","Data":"bf11820cc7ed98403d34cfab2d8151f0b4f3a8abe3dd0e4855b0195d682b1cbf"} Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.171171 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerDied","Data":"3c4f62d5ba8035d2c72068327a54191e4d48991b853baa4603e6fa13db3be135"} Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.171186 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerDied","Data":"60105a19603295e31b24d0136a87215c08c3bd1621617d3cec8a25ee8ba7acf7"} Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.455753 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.575073 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-scripts\") pod \"690084a6-caff-452d-a954-c15ae02e4630\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.575524 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-config-data\") pod \"690084a6-caff-452d-a954-c15ae02e4630\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.575551 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-run-httpd\") pod \"690084a6-caff-452d-a954-c15ae02e4630\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.575582 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-combined-ca-bundle\") pod \"690084a6-caff-452d-a954-c15ae02e4630\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.575689 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-sg-core-conf-yaml\") pod \"690084a6-caff-452d-a954-c15ae02e4630\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.575728 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-log-httpd\") pod \"690084a6-caff-452d-a954-c15ae02e4630\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.575775 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glk92\" (UniqueName: \"kubernetes.io/projected/690084a6-caff-452d-a954-c15ae02e4630-kube-api-access-glk92\") pod \"690084a6-caff-452d-a954-c15ae02e4630\" (UID: \"690084a6-caff-452d-a954-c15ae02e4630\") " Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.577602 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "690084a6-caff-452d-a954-c15ae02e4630" (UID: "690084a6-caff-452d-a954-c15ae02e4630"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.579782 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "690084a6-caff-452d-a954-c15ae02e4630" (UID: "690084a6-caff-452d-a954-c15ae02e4630"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.583140 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-scripts" (OuterVolumeSpecName: "scripts") pod "690084a6-caff-452d-a954-c15ae02e4630" (UID: "690084a6-caff-452d-a954-c15ae02e4630"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.583284 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690084a6-caff-452d-a954-c15ae02e4630-kube-api-access-glk92" (OuterVolumeSpecName: "kube-api-access-glk92") pod "690084a6-caff-452d-a954-c15ae02e4630" (UID: "690084a6-caff-452d-a954-c15ae02e4630"). InnerVolumeSpecName "kube-api-access-glk92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.606202 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "690084a6-caff-452d-a954-c15ae02e4630" (UID: "690084a6-caff-452d-a954-c15ae02e4630"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.656146 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "690084a6-caff-452d-a954-c15ae02e4630" (UID: "690084a6-caff-452d-a954-c15ae02e4630"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.687384 4959 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.687532 4959 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.687551 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glk92\" (UniqueName: \"kubernetes.io/projected/690084a6-caff-452d-a954-c15ae02e4630-kube-api-access-glk92\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.687565 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.687578 4959 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/690084a6-caff-452d-a954-c15ae02e4630-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.687590 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.697409 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-config-data" (OuterVolumeSpecName: "config-data") pod "690084a6-caff-452d-a954-c15ae02e4630" (UID: "690084a6-caff-452d-a954-c15ae02e4630"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:39 crc kubenswrapper[4959]: I0121 13:31:39.789656 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/690084a6-caff-452d-a954-c15ae02e4630-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.180598 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"690084a6-caff-452d-a954-c15ae02e4630","Type":"ContainerDied","Data":"4c1df1ee1c9405df8ff17f2226377f0cfe38946e27d53a087d5f121eb8306bba"} Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.180661 4959 scope.go:117] "RemoveContainer" containerID="6f56825cfd5a733c2fa805de3f3816e4baf818a6746dbd9e871fe4fb98238fc7" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.180664 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.203266 4959 scope.go:117] "RemoveContainer" containerID="bf11820cc7ed98403d34cfab2d8151f0b4f3a8abe3dd0e4855b0195d682b1cbf" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.216306 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.225301 4959 scope.go:117] "RemoveContainer" containerID="3c4f62d5ba8035d2c72068327a54191e4d48991b853baa4603e6fa13db3be135" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.229523 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.242798 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.243405 4959 scope.go:117] "RemoveContainer" containerID="60105a19603295e31b24d0136a87215c08c3bd1621617d3cec8a25ee8ba7acf7" Jan 21 13:31:40 crc kubenswrapper[4959]: E0121 13:31:40.243758 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="ceilometer-notification-agent" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.243830 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="ceilometer-notification-agent" Jan 21 13:31:40 crc kubenswrapper[4959]: E0121 13:31:40.243886 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="sg-core" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.243975 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="sg-core" Jan 21 13:31:40 crc kubenswrapper[4959]: E0121 13:31:40.244059 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="proxy-httpd" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.244129 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="proxy-httpd" Jan 21 13:31:40 crc kubenswrapper[4959]: E0121 13:31:40.244199 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690084a6-caff-452d-a954-c15ae02e4630" 
containerName="ceilometer-central-agent" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.244253 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="ceilometer-central-agent" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.244464 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="sg-core" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.244525 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="proxy-httpd" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.244590 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="ceilometer-notification-agent" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.244662 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="690084a6-caff-452d-a954-c15ae02e4630" containerName="ceilometer-central-agent" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.247334 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.253095 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.253439 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.259557 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.401817 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-config-data\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.401931 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.401953 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gs5\" (UniqueName: \"kubernetes.io/projected/747d303f-591f-4630-aba1-9a53e6d2f515-kube-api-access-89gs5\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.401974 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-log-httpd\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.402166 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-run-httpd\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " 
pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.402231 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-scripts\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.402264 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.504170 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.504221 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89gs5\" (UniqueName: \"kubernetes.io/projected/747d303f-591f-4630-aba1-9a53e6d2f515-kube-api-access-89gs5\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.504250 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-log-httpd\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.504296 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-run-httpd\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.504314 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-scripts\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.504332 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.504372 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-config-data\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.505259 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-log-httpd\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " 
pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.505563 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-run-httpd\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.508334 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-scripts\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.509745 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.514649 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-config-data\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.523354 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.523360 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89gs5\" (UniqueName: \"kubernetes.io/projected/747d303f-591f-4630-aba1-9a53e6d2f515-kube-api-access-89gs5\") pod \"ceilometer-0\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " pod="openstack/ceilometer-0" Jan 21 13:31:40 crc kubenswrapper[4959]: I0121 13:31:40.571801 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:41 crc kubenswrapper[4959]: I0121 13:31:41.067078 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:41 crc kubenswrapper[4959]: W0121 13:31:41.068874 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod747d303f_591f_4630_aba1_9a53e6d2f515.slice/crio-49a4873da9a48a435cbc6add16a82f938160e0c2a6dcdde17e959c4641e8ef0b WatchSource:0}: Error finding container 49a4873da9a48a435cbc6add16a82f938160e0c2a6dcdde17e959c4641e8ef0b: Status 404 returned error can't find the container with id 49a4873da9a48a435cbc6add16a82f938160e0c2a6dcdde17e959c4641e8ef0b Jan 21 13:31:41 crc kubenswrapper[4959]: I0121 13:31:41.192184 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerStarted","Data":"49a4873da9a48a435cbc6add16a82f938160e0c2a6dcdde17e959c4641e8ef0b"} Jan 21 13:31:41 crc kubenswrapper[4959]: I0121 13:31:41.295247 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690084a6-caff-452d-a954-c15ae02e4630" path="/var/lib/kubelet/pods/690084a6-caff-452d-a954-c15ae02e4630/volumes" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.202176 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerStarted","Data":"ea9b601f18215ecbf86794987508d57c34f25374df87db4df8cb04432135f650"} Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.624542 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mtgkn"] Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.628641 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.644192 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mtgkn"] Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.716021 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7r8zr"] Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.717307 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.727520 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7r8zr"] Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.746430 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwlp5\" (UniqueName: \"kubernetes.io/projected/bdc282d4-ce0c-40cc-be72-407c31c0effe-kube-api-access-wwlp5\") pod \"nova-api-db-create-mtgkn\" (UID: \"bdc282d4-ce0c-40cc-be72-407c31c0effe\") " pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.746478 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdc282d4-ce0c-40cc-be72-407c31c0effe-operator-scripts\") pod \"nova-api-db-create-mtgkn\" (UID: \"bdc282d4-ce0c-40cc-be72-407c31c0effe\") " pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.819395 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ca0c-account-create-update-58lql"] Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.820415 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.824218 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.844434 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ca0c-account-create-update-58lql"] Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.849876 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-operator-scripts\") pod \"nova-cell0-db-create-7r8zr\" (UID: \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\") " pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.849963 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwlp5\" (UniqueName: \"kubernetes.io/projected/bdc282d4-ce0c-40cc-be72-407c31c0effe-kube-api-access-wwlp5\") pod \"nova-api-db-create-mtgkn\" (UID: \"bdc282d4-ce0c-40cc-be72-407c31c0effe\") " pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.849991 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdc282d4-ce0c-40cc-be72-407c31c0effe-operator-scripts\") pod \"nova-api-db-create-mtgkn\" (UID: \"bdc282d4-ce0c-40cc-be72-407c31c0effe\") " pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.850019 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txl5\" (UniqueName: \"kubernetes.io/projected/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-kube-api-access-2txl5\") pod \"nova-cell0-db-create-7r8zr\" (UID: \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\") " pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.850840 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bdc282d4-ce0c-40cc-be72-407c31c0effe-operator-scripts\") pod \"nova-api-db-create-mtgkn\" (UID: \"bdc282d4-ce0c-40cc-be72-407c31c0effe\") " pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.867524 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwlp5\" (UniqueName: \"kubernetes.io/projected/bdc282d4-ce0c-40cc-be72-407c31c0effe-kube-api-access-wwlp5\") pod \"nova-api-db-create-mtgkn\" (UID: \"bdc282d4-ce0c-40cc-be72-407c31c0effe\") " pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.925500 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bmtxq"] Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.926938 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.949180 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bmtxq"] Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.956830 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2txl5\" (UniqueName: \"kubernetes.io/projected/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-kube-api-access-2txl5\") pod \"nova-cell0-db-create-7r8zr\" (UID: \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\") " pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.957846 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff28e6a-1072-4f12-8b90-e62346cdcf59-operator-scripts\") pod \"nova-api-ca0c-account-create-update-58lql\" (UID: \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\") " pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.957979 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6z8d\" (UniqueName: \"kubernetes.io/projected/5ff28e6a-1072-4f12-8b90-e62346cdcf59-kube-api-access-t6z8d\") pod \"nova-api-ca0c-account-create-update-58lql\" (UID: \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\") " pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.958018 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-operator-scripts\") pod \"nova-cell0-db-create-7r8zr\" (UID: \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\") " pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.958865 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-operator-scripts\") pod \"nova-cell0-db-create-7r8zr\" (UID: \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\") " pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:42 crc kubenswrapper[4959]: I0121 13:31:42.977354 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2txl5\" (UniqueName: \"kubernetes.io/projected/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-kube-api-access-2txl5\") pod \"nova-cell0-db-create-7r8zr\" (UID: \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\") " pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 
13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.019996 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.026715 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-54c0-account-create-update-6sn5l"] Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.027863 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.029900 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.049326 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.056866 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-54c0-account-create-update-6sn5l"] Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.061297 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff28e6a-1072-4f12-8b90-e62346cdcf59-operator-scripts\") pod \"nova-api-ca0c-account-create-update-58lql\" (UID: \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\") " pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.061367 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-operator-scripts\") pod \"nova-cell1-db-create-bmtxq\" (UID: \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\") " pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.061419 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zklvn\" (UniqueName: \"kubernetes.io/projected/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-kube-api-access-zklvn\") pod \"nova-cell1-db-create-bmtxq\" (UID: \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\") " pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.061441 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6z8d\" (UniqueName: \"kubernetes.io/projected/5ff28e6a-1072-4f12-8b90-e62346cdcf59-kube-api-access-t6z8d\") pod \"nova-api-ca0c-account-create-update-58lql\" (UID: \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\") " pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.062590 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff28e6a-1072-4f12-8b90-e62346cdcf59-operator-scripts\") pod \"nova-api-ca0c-account-create-update-58lql\" (UID: \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\") " pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.087574 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6z8d\" (UniqueName: \"kubernetes.io/projected/5ff28e6a-1072-4f12-8b90-e62346cdcf59-kube-api-access-t6z8d\") pod \"nova-api-ca0c-account-create-update-58lql\" (UID: \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\") " 
pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.140846 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.162662 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-operator-scripts\") pod \"nova-cell1-db-create-bmtxq\" (UID: \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\") " pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.162906 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zklvn\" (UniqueName: \"kubernetes.io/projected/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-kube-api-access-zklvn\") pod \"nova-cell1-db-create-bmtxq\" (UID: \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\") " pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.162976 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9485d7a3-a28a-47b1-b949-197adac8d89c-operator-scripts\") pod \"nova-cell0-54c0-account-create-update-6sn5l\" (UID: \"9485d7a3-a28a-47b1-b949-197adac8d89c\") " pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.163008 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtbfn\" (UniqueName: \"kubernetes.io/projected/9485d7a3-a28a-47b1-b949-197adac8d89c-kube-api-access-rtbfn\") pod \"nova-cell0-54c0-account-create-update-6sn5l\" (UID: \"9485d7a3-a28a-47b1-b949-197adac8d89c\") " pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.163655 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-operator-scripts\") pod \"nova-cell1-db-create-bmtxq\" (UID: \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\") " pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.183047 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zklvn\" (UniqueName: \"kubernetes.io/projected/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-kube-api-access-zklvn\") pod \"nova-cell1-db-create-bmtxq\" (UID: \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\") " pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.261965 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.268920 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9485d7a3-a28a-47b1-b949-197adac8d89c-operator-scripts\") pod \"nova-cell0-54c0-account-create-update-6sn5l\" (UID: \"9485d7a3-a28a-47b1-b949-197adac8d89c\") " pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.268990 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtbfn\" (UniqueName: \"kubernetes.io/projected/9485d7a3-a28a-47b1-b949-197adac8d89c-kube-api-access-rtbfn\") pod \"nova-cell0-54c0-account-create-update-6sn5l\" (UID: \"9485d7a3-a28a-47b1-b949-197adac8d89c\") " pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.269973 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9485d7a3-a28a-47b1-b949-197adac8d89c-operator-scripts\") pod \"nova-cell0-54c0-account-create-update-6sn5l\" (UID: \"9485d7a3-a28a-47b1-b949-197adac8d89c\") " pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.277390 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-36a4-account-create-update-czbvk"] Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.279604 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerStarted","Data":"bc51e0f32efee35699f42a04bd0213fb4a363464e46205be5ed59ed6d1f89ad9"} Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.279726 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.288309 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.317131 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtbfn\" (UniqueName: \"kubernetes.io/projected/9485d7a3-a28a-47b1-b949-197adac8d89c-kube-api-access-rtbfn\") pod \"nova-cell0-54c0-account-create-update-6sn5l\" (UID: \"9485d7a3-a28a-47b1-b949-197adac8d89c\") " pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.328791 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-36a4-account-create-update-czbvk"] Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.364569 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.371119 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbjsg\" (UniqueName: \"kubernetes.io/projected/2f15817e-88bb-4e43-afd3-024e73dd60f5-kube-api-access-lbjsg\") pod \"nova-cell1-36a4-account-create-update-czbvk\" (UID: \"2f15817e-88bb-4e43-afd3-024e73dd60f5\") " pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.371161 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f15817e-88bb-4e43-afd3-024e73dd60f5-operator-scripts\") pod \"nova-cell1-36a4-account-create-update-czbvk\" (UID: \"2f15817e-88bb-4e43-afd3-024e73dd60f5\") " pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.473263 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbjsg\" (UniqueName: \"kubernetes.io/projected/2f15817e-88bb-4e43-afd3-024e73dd60f5-kube-api-access-lbjsg\") pod \"nova-cell1-36a4-account-create-update-czbvk\" (UID: \"2f15817e-88bb-4e43-afd3-024e73dd60f5\") " pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.473329 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f15817e-88bb-4e43-afd3-024e73dd60f5-operator-scripts\") pod \"nova-cell1-36a4-account-create-update-czbvk\" (UID: \"2f15817e-88bb-4e43-afd3-024e73dd60f5\") " pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.474322 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f15817e-88bb-4e43-afd3-024e73dd60f5-operator-scripts\") pod \"nova-cell1-36a4-account-create-update-czbvk\" (UID: \"2f15817e-88bb-4e43-afd3-024e73dd60f5\") " pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.497618 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbjsg\" (UniqueName: \"kubernetes.io/projected/2f15817e-88bb-4e43-afd3-024e73dd60f5-kube-api-access-lbjsg\") pod \"nova-cell1-36a4-account-create-update-czbvk\" (UID: \"2f15817e-88bb-4e43-afd3-024e73dd60f5\") " pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.604691 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mtgkn"] Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.632667 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.736881 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7r8zr"] Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.899823 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bmtxq"] Jan 21 13:31:43 crc kubenswrapper[4959]: I0121 13:31:43.942029 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ca0c-account-create-update-58lql"] Jan 21 13:31:43 crc kubenswrapper[4959]: W0121 13:31:43.946388 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod458c6ea3_bcb1_4e4b_8a2a_0fc1af49a8ae.slice/crio-bc172ec9218bd5320671385a510a8f4bfc4f02d3f1cd228c58dd5d2e17ed9c80 WatchSource:0}: Error finding container bc172ec9218bd5320671385a510a8f4bfc4f02d3f1cd228c58dd5d2e17ed9c80: Status 404 returned error can't find the container with id bc172ec9218bd5320671385a510a8f4bfc4f02d3f1cd228c58dd5d2e17ed9c80 Jan 21 13:31:43 crc kubenswrapper[4959]: W0121 13:31:43.993563 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ff28e6a_1072_4f12_8b90_e62346cdcf59.slice/crio-c9886946b110171f5841fba9e99bf53c4371f9f234eb3f9f04a9526701119715 WatchSource:0}: Error finding container c9886946b110171f5841fba9e99bf53c4371f9f234eb3f9f04a9526701119715: Status 404 returned error can't find the container with id c9886946b110171f5841fba9e99bf53c4371f9f234eb3f9f04a9526701119715 Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.147942 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-54c0-account-create-update-6sn5l"] Jan 21 13:31:44 crc kubenswrapper[4959]: W0121 13:31:44.159157 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9485d7a3_a28a_47b1_b949_197adac8d89c.slice/crio-43038e1d0bc5d2cd0e2bad2d7fdf3b62a330e9e5e2e905628981bb3817f482f2 WatchSource:0}: Error finding container 43038e1d0bc5d2cd0e2bad2d7fdf3b62a330e9e5e2e905628981bb3817f482f2: Status 404 returned error can't find the container with id 43038e1d0bc5d2cd0e2bad2d7fdf3b62a330e9e5e2e905628981bb3817f482f2 Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.292666 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ca0c-account-create-update-58lql" event={"ID":"5ff28e6a-1072-4f12-8b90-e62346cdcf59","Type":"ContainerStarted","Data":"c9886946b110171f5841fba9e99bf53c4371f9f234eb3f9f04a9526701119715"} Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.303142 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bmtxq" event={"ID":"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae","Type":"ContainerStarted","Data":"bc172ec9218bd5320671385a510a8f4bfc4f02d3f1cd228c58dd5d2e17ed9c80"} Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.305007 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerStarted","Data":"7540b773b8d37977087f40477bffde81ba0cc4061bf4eb5540ada19262a11a67"} Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.308438 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7r8zr" 
event={"ID":"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0","Type":"ContainerStarted","Data":"711e0c762b5401056c1b52d6422ef67b2039bedc915f08f88fff903cf4e75653"} Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.310718 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mtgkn" event={"ID":"bdc282d4-ce0c-40cc-be72-407c31c0effe","Type":"ContainerStarted","Data":"bd60b0b69531fbf2908d2bf78d673c63403831348f233c7b9ad5c093184929cd"} Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.310749 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mtgkn" event={"ID":"bdc282d4-ce0c-40cc-be72-407c31c0effe","Type":"ContainerStarted","Data":"4e3ab4e5edb1232f20b38eed1d7f8f6c49193045031458ddcd8bc2d9515811c9"} Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.311519 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" event={"ID":"9485d7a3-a28a-47b1-b949-197adac8d89c","Type":"ContainerStarted","Data":"43038e1d0bc5d2cd0e2bad2d7fdf3b62a330e9e5e2e905628981bb3817f482f2"} Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.325919 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-7r8zr" podStartSLOduration=2.325904817 podStartE2EDuration="2.325904817s" podCreationTimestamp="2026-01-21 13:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:31:44.321753712 +0000 UTC m=+1365.284784255" watchObservedRunningTime="2026-01-21 13:31:44.325904817 +0000 UTC m=+1365.288935360" Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.339703 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-mtgkn" podStartSLOduration=2.339683668 podStartE2EDuration="2.339683668s" podCreationTimestamp="2026-01-21 13:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:31:44.338403883 +0000 UTC m=+1365.301434426" watchObservedRunningTime="2026-01-21 13:31:44.339683668 +0000 UTC m=+1365.302714211" Jan 21 13:31:44 crc kubenswrapper[4959]: I0121 13:31:44.403238 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-36a4-account-create-update-czbvk"] Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.338229 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerStarted","Data":"48d40c153141fcaaaea2a7d4fc14fc0c0f666a4d31406057a9e6a5a1b3bd220d"} Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.339234 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.343533 4959 generic.go:334] "Generic (PLEG): container finished" podID="d874dcfc-b5e9-42a8-bde8-fe6d8e998af0" containerID="d82362bd7af4309c93acccac6858de816a4168c87efe0f3b9699c24ef54e6efd" exitCode=0 Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.343654 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7r8zr" event={"ID":"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0","Type":"ContainerDied","Data":"d82362bd7af4309c93acccac6858de816a4168c87efe0f3b9699c24ef54e6efd"} Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.346244 4959 generic.go:334] "Generic (PLEG): container 
finished" podID="bdc282d4-ce0c-40cc-be72-407c31c0effe" containerID="bd60b0b69531fbf2908d2bf78d673c63403831348f233c7b9ad5c093184929cd" exitCode=0 Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.346299 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mtgkn" event={"ID":"bdc282d4-ce0c-40cc-be72-407c31c0effe","Type":"ContainerDied","Data":"bd60b0b69531fbf2908d2bf78d673c63403831348f233c7b9ad5c093184929cd"} Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.347473 4959 generic.go:334] "Generic (PLEG): container finished" podID="2f15817e-88bb-4e43-afd3-024e73dd60f5" containerID="82dac5a2a74317ccb05e0df4c5513e86e7a281c83e0d8862107e3d318256e80b" exitCode=0 Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.347528 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-36a4-account-create-update-czbvk" event={"ID":"2f15817e-88bb-4e43-afd3-024e73dd60f5","Type":"ContainerDied","Data":"82dac5a2a74317ccb05e0df4c5513e86e7a281c83e0d8862107e3d318256e80b"} Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.347547 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-36a4-account-create-update-czbvk" event={"ID":"2f15817e-88bb-4e43-afd3-024e73dd60f5","Type":"ContainerStarted","Data":"e9b7806c4ed821c453e96c689b8817e9e570101847a93e3d9601fee0af2d8201"} Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.349537 4959 generic.go:334] "Generic (PLEG): container finished" podID="9485d7a3-a28a-47b1-b949-197adac8d89c" containerID="9ce6d7d0a4d4d15cd6962602c402fb6e7cb0c3a26a30cc76c5fbc7ab54331077" exitCode=0 Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.349578 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" event={"ID":"9485d7a3-a28a-47b1-b949-197adac8d89c","Type":"ContainerDied","Data":"9ce6d7d0a4d4d15cd6962602c402fb6e7cb0c3a26a30cc76c5fbc7ab54331077"} Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.350831 4959 generic.go:334] "Generic (PLEG): container finished" podID="5ff28e6a-1072-4f12-8b90-e62346cdcf59" containerID="5875645ac4137f6ee5e40bf3414e203b8181fa3c05e36df4b548a2cf26fd40eb" exitCode=0 Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.350883 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ca0c-account-create-update-58lql" event={"ID":"5ff28e6a-1072-4f12-8b90-e62346cdcf59","Type":"ContainerDied","Data":"5875645ac4137f6ee5e40bf3414e203b8181fa3c05e36df4b548a2cf26fd40eb"} Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.351916 4959 generic.go:334] "Generic (PLEG): container finished" podID="458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae" containerID="4853647e155c4126a5ca3cb8cd41d5182a2d27344cf1a3ee2f3b722b0f5601ae" exitCode=0 Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.351953 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bmtxq" event={"ID":"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae","Type":"ContainerDied","Data":"4853647e155c4126a5ca3cb8cd41d5182a2d27344cf1a3ee2f3b722b0f5601ae"} Jan 21 13:31:45 crc kubenswrapper[4959]: I0121 13:31:45.361078 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.338934161 podStartE2EDuration="5.361065735s" podCreationTimestamp="2026-01-21 13:31:40 +0000 UTC" firstStartedPulling="2026-01-21 13:31:41.072883203 +0000 UTC m=+1362.035913746" lastFinishedPulling="2026-01-21 13:31:45.095014777 +0000 UTC m=+1366.058045320" 
observedRunningTime="2026-01-21 13:31:45.359667496 +0000 UTC m=+1366.322698039" watchObservedRunningTime="2026-01-21 13:31:45.361065735 +0000 UTC m=+1366.324096278" Jan 21 13:31:46 crc kubenswrapper[4959]: I0121 13:31:46.768712 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:46 crc kubenswrapper[4959]: I0121 13:31:46.843448 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9485d7a3-a28a-47b1-b949-197adac8d89c-operator-scripts\") pod \"9485d7a3-a28a-47b1-b949-197adac8d89c\" (UID: \"9485d7a3-a28a-47b1-b949-197adac8d89c\") " Jan 21 13:31:46 crc kubenswrapper[4959]: I0121 13:31:46.843785 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtbfn\" (UniqueName: \"kubernetes.io/projected/9485d7a3-a28a-47b1-b949-197adac8d89c-kube-api-access-rtbfn\") pod \"9485d7a3-a28a-47b1-b949-197adac8d89c\" (UID: \"9485d7a3-a28a-47b1-b949-197adac8d89c\") " Jan 21 13:31:46 crc kubenswrapper[4959]: I0121 13:31:46.843899 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9485d7a3-a28a-47b1-b949-197adac8d89c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9485d7a3-a28a-47b1-b949-197adac8d89c" (UID: "9485d7a3-a28a-47b1-b949-197adac8d89c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:46 crc kubenswrapper[4959]: I0121 13:31:46.844301 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9485d7a3-a28a-47b1-b949-197adac8d89c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:46 crc kubenswrapper[4959]: I0121 13:31:46.851779 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9485d7a3-a28a-47b1-b949-197adac8d89c-kube-api-access-rtbfn" (OuterVolumeSpecName: "kube-api-access-rtbfn") pod "9485d7a3-a28a-47b1-b949-197adac8d89c" (UID: "9485d7a3-a28a-47b1-b949-197adac8d89c"). InnerVolumeSpecName "kube-api-access-rtbfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:46 crc kubenswrapper[4959]: I0121 13:31:46.946354 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtbfn\" (UniqueName: \"kubernetes.io/projected/9485d7a3-a28a-47b1-b949-197adac8d89c-kube-api-access-rtbfn\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:46 crc kubenswrapper[4959]: I0121 13:31:46.996115 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.005475 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.021127 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.048936 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdc282d4-ce0c-40cc-be72-407c31c0effe-operator-scripts\") pod \"bdc282d4-ce0c-40cc-be72-407c31c0effe\" (UID: \"bdc282d4-ce0c-40cc-be72-407c31c0effe\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049026 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049044 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbjsg\" (UniqueName: \"kubernetes.io/projected/2f15817e-88bb-4e43-afd3-024e73dd60f5-kube-api-access-lbjsg\") pod \"2f15817e-88bb-4e43-afd3-024e73dd60f5\" (UID: \"2f15817e-88bb-4e43-afd3-024e73dd60f5\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049081 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff28e6a-1072-4f12-8b90-e62346cdcf59-operator-scripts\") pod \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\" (UID: \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049161 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwlp5\" (UniqueName: \"kubernetes.io/projected/bdc282d4-ce0c-40cc-be72-407c31c0effe-kube-api-access-wwlp5\") pod \"bdc282d4-ce0c-40cc-be72-407c31c0effe\" (UID: \"bdc282d4-ce0c-40cc-be72-407c31c0effe\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049214 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f15817e-88bb-4e43-afd3-024e73dd60f5-operator-scripts\") pod \"2f15817e-88bb-4e43-afd3-024e73dd60f5\" (UID: \"2f15817e-88bb-4e43-afd3-024e73dd60f5\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049395 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6z8d\" (UniqueName: \"kubernetes.io/projected/5ff28e6a-1072-4f12-8b90-e62346cdcf59-kube-api-access-t6z8d\") pod \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\" (UID: \"5ff28e6a-1072-4f12-8b90-e62346cdcf59\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049507 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc282d4-ce0c-40cc-be72-407c31c0effe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdc282d4-ce0c-40cc-be72-407c31c0effe" (UID: "bdc282d4-ce0c-40cc-be72-407c31c0effe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049844 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdc282d4-ce0c-40cc-be72-407c31c0effe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.049951 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff28e6a-1072-4f12-8b90-e62346cdcf59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ff28e6a-1072-4f12-8b90-e62346cdcf59" (UID: "5ff28e6a-1072-4f12-8b90-e62346cdcf59"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.050817 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f15817e-88bb-4e43-afd3-024e73dd60f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f15817e-88bb-4e43-afd3-024e73dd60f5" (UID: "2f15817e-88bb-4e43-afd3-024e73dd60f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.054974 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff28e6a-1072-4f12-8b90-e62346cdcf59-kube-api-access-t6z8d" (OuterVolumeSpecName: "kube-api-access-t6z8d") pod "5ff28e6a-1072-4f12-8b90-e62346cdcf59" (UID: "5ff28e6a-1072-4f12-8b90-e62346cdcf59"). InnerVolumeSpecName "kube-api-access-t6z8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.056590 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc282d4-ce0c-40cc-be72-407c31c0effe-kube-api-access-wwlp5" (OuterVolumeSpecName: "kube-api-access-wwlp5") pod "bdc282d4-ce0c-40cc-be72-407c31c0effe" (UID: "bdc282d4-ce0c-40cc-be72-407c31c0effe"). InnerVolumeSpecName "kube-api-access-wwlp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.058658 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.060723 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f15817e-88bb-4e43-afd3-024e73dd60f5-kube-api-access-lbjsg" (OuterVolumeSpecName: "kube-api-access-lbjsg") pod "2f15817e-88bb-4e43-afd3-024e73dd60f5" (UID: "2f15817e-88bb-4e43-afd3-024e73dd60f5"). InnerVolumeSpecName "kube-api-access-lbjsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.150965 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-operator-scripts\") pod \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\" (UID: \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151016 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-operator-scripts\") pod \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\" (UID: \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151055 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zklvn\" (UniqueName: \"kubernetes.io/projected/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-kube-api-access-zklvn\") pod \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\" (UID: \"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151167 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2txl5\" (UniqueName: \"kubernetes.io/projected/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-kube-api-access-2txl5\") pod \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\" (UID: \"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0\") " Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151477 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d874dcfc-b5e9-42a8-bde8-fe6d8e998af0" (UID: "d874dcfc-b5e9-42a8-bde8-fe6d8e998af0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151512 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6z8d\" (UniqueName: \"kubernetes.io/projected/5ff28e6a-1072-4f12-8b90-e62346cdcf59-kube-api-access-t6z8d\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151529 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbjsg\" (UniqueName: \"kubernetes.io/projected/2f15817e-88bb-4e43-afd3-024e73dd60f5-kube-api-access-lbjsg\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151539 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff28e6a-1072-4f12-8b90-e62346cdcf59-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151549 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwlp5\" (UniqueName: \"kubernetes.io/projected/bdc282d4-ce0c-40cc-be72-407c31c0effe-kube-api-access-wwlp5\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151557 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f15817e-88bb-4e43-afd3-024e73dd60f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.151677 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae" (UID: "458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.155284 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-kube-api-access-zklvn" (OuterVolumeSpecName: "kube-api-access-zklvn") pod "458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae" (UID: "458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae"). InnerVolumeSpecName "kube-api-access-zklvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.155340 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-kube-api-access-2txl5" (OuterVolumeSpecName: "kube-api-access-2txl5") pod "d874dcfc-b5e9-42a8-bde8-fe6d8e998af0" (UID: "d874dcfc-b5e9-42a8-bde8-fe6d8e998af0"). InnerVolumeSpecName "kube-api-access-2txl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.253719 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.253773 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.253788 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zklvn\" (UniqueName: \"kubernetes.io/projected/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae-kube-api-access-zklvn\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.253822 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2txl5\" (UniqueName: \"kubernetes.io/projected/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0-kube-api-access-2txl5\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.369876 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.369864 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-54c0-account-create-update-6sn5l" event={"ID":"9485d7a3-a28a-47b1-b949-197adac8d89c","Type":"ContainerDied","Data":"43038e1d0bc5d2cd0e2bad2d7fdf3b62a330e9e5e2e905628981bb3817f482f2"} Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.370010 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43038e1d0bc5d2cd0e2bad2d7fdf3b62a330e9e5e2e905628981bb3817f482f2" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.371441 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ca0c-account-create-update-58lql" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.371446 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ca0c-account-create-update-58lql" event={"ID":"5ff28e6a-1072-4f12-8b90-e62346cdcf59","Type":"ContainerDied","Data":"c9886946b110171f5841fba9e99bf53c4371f9f234eb3f9f04a9526701119715"} Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.371490 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9886946b110171f5841fba9e99bf53c4371f9f234eb3f9f04a9526701119715" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.373139 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bmtxq" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.373186 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bmtxq" event={"ID":"458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae","Type":"ContainerDied","Data":"bc172ec9218bd5320671385a510a8f4bfc4f02d3f1cd228c58dd5d2e17ed9c80"} Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.373201 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc172ec9218bd5320671385a510a8f4bfc4f02d3f1cd228c58dd5d2e17ed9c80" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.375801 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7r8zr" event={"ID":"d874dcfc-b5e9-42a8-bde8-fe6d8e998af0","Type":"ContainerDied","Data":"711e0c762b5401056c1b52d6422ef67b2039bedc915f08f88fff903cf4e75653"} Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.375826 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="711e0c762b5401056c1b52d6422ef67b2039bedc915f08f88fff903cf4e75653" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.375858 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7r8zr" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.381853 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mtgkn" event={"ID":"bdc282d4-ce0c-40cc-be72-407c31c0effe","Type":"ContainerDied","Data":"4e3ab4e5edb1232f20b38eed1d7f8f6c49193045031458ddcd8bc2d9515811c9"} Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.381873 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e3ab4e5edb1232f20b38eed1d7f8f6c49193045031458ddcd8bc2d9515811c9" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.381938 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mtgkn" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.384489 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-36a4-account-create-update-czbvk" event={"ID":"2f15817e-88bb-4e43-afd3-024e73dd60f5","Type":"ContainerDied","Data":"e9b7806c4ed821c453e96c689b8817e9e570101847a93e3d9601fee0af2d8201"} Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.384537 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9b7806c4ed821c453e96c689b8817e9e570101847a93e3d9601fee0af2d8201" Jan 21 13:31:47 crc kubenswrapper[4959]: I0121 13:31:47.384597 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-36a4-account-create-update-czbvk" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.326052 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.400324 4959 generic.go:334] "Generic (PLEG): container finished" podID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerID="f3dc89d0c9f62f630cae2a945c9dd4602ffadce1b4d490879f863549fc28ad34" exitCode=0 Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.400373 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8c7687b4-bsf2j" event={"ID":"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54","Type":"ContainerDied","Data":"f3dc89d0c9f62f630cae2a945c9dd4602ffadce1b4d490879f863549fc28ad34"} Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.751376 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.785689 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-httpd-config\") pod \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.786015 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjsd5\" (UniqueName: \"kubernetes.io/projected/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-kube-api-access-sjsd5\") pod \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.786120 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-config\") pod \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.786242 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-combined-ca-bundle\") pod \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.786554 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-ovndb-tls-certs\") pod \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\" (UID: \"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54\") " Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.796410 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-kube-api-access-sjsd5" (OuterVolumeSpecName: "kube-api-access-sjsd5") pod "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" (UID: "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54"). InnerVolumeSpecName "kube-api-access-sjsd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.810802 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" (UID: "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.853987 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" (UID: "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.866435 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-config" (OuterVolumeSpecName: "config") pod "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" (UID: "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.868813 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" (UID: "44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.888657 4959 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.888778 4959 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.888791 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjsd5\" (UniqueName: \"kubernetes.io/projected/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-kube-api-access-sjsd5\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.888802 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:48 crc kubenswrapper[4959]: I0121 13:31:48.888810 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:49 crc kubenswrapper[4959]: I0121 13:31:49.410218 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8c7687b4-bsf2j" event={"ID":"44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54","Type":"ContainerDied","Data":"080d0dd96d035b216bcdea78627c947493ccc5d6459b921a79fb85491bc7bf9d"} Jan 21 13:31:49 crc kubenswrapper[4959]: I0121 13:31:49.410603 4959 scope.go:117] "RemoveContainer" 
containerID="cd03d59c2a16a039669d3604509298a6e18d85b9c654dd979b8c3bf08234277e" Jan 21 13:31:49 crc kubenswrapper[4959]: I0121 13:31:49.410272 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d8c7687b4-bsf2j" Jan 21 13:31:49 crc kubenswrapper[4959]: I0121 13:31:49.441041 4959 scope.go:117] "RemoveContainer" containerID="f3dc89d0c9f62f630cae2a945c9dd4602ffadce1b4d490879f863549fc28ad34" Jan 21 13:31:49 crc kubenswrapper[4959]: I0121 13:31:49.443508 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d8c7687b4-bsf2j"] Jan 21 13:31:49 crc kubenswrapper[4959]: I0121 13:31:49.451780 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5d8c7687b4-bsf2j"] Jan 21 13:31:51 crc kubenswrapper[4959]: I0121 13:31:51.295641 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" path="/var/lib/kubelet/pods/44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54/volumes" Jan 21 13:31:52 crc kubenswrapper[4959]: I0121 13:31:52.581676 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:52 crc kubenswrapper[4959]: I0121 13:31:52.582007 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="ceilometer-central-agent" containerID="cri-o://ea9b601f18215ecbf86794987508d57c34f25374df87db4df8cb04432135f650" gracePeriod=30 Jan 21 13:31:52 crc kubenswrapper[4959]: I0121 13:31:52.582438 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="ceilometer-notification-agent" containerID="cri-o://bc51e0f32efee35699f42a04bd0213fb4a363464e46205be5ed59ed6d1f89ad9" gracePeriod=30 Jan 21 13:31:52 crc kubenswrapper[4959]: I0121 13:31:52.582719 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="sg-core" containerID="cri-o://7540b773b8d37977087f40477bffde81ba0cc4061bf4eb5540ada19262a11a67" gracePeriod=30 Jan 21 13:31:52 crc kubenswrapper[4959]: I0121 13:31:52.583980 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="proxy-httpd" containerID="cri-o://48d40c153141fcaaaea2a7d4fc14fc0c0f666a4d31406057a9e6a5a1b3bd220d" gracePeriod=30 Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.024834 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d6rms"] Jan 21 13:31:53 crc kubenswrapper[4959]: E0121 13:31:53.025700 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerName="neutron-api" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.025741 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerName="neutron-api" Jan 21 13:31:53 crc kubenswrapper[4959]: E0121 13:31:53.025758 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff28e6a-1072-4f12-8b90-e62346cdcf59" containerName="mariadb-account-create-update" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.025766 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff28e6a-1072-4f12-8b90-e62346cdcf59" containerName="mariadb-account-create-update" Jan 21 
13:31:53 crc kubenswrapper[4959]: E0121 13:31:53.025778 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerName="neutron-httpd" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.025785 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerName="neutron-httpd" Jan 21 13:31:53 crc kubenswrapper[4959]: E0121 13:31:53.025807 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.025814 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: E0121 13:31:53.025827 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f15817e-88bb-4e43-afd3-024e73dd60f5" containerName="mariadb-account-create-update" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.025834 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f15817e-88bb-4e43-afd3-024e73dd60f5" containerName="mariadb-account-create-update" Jan 21 13:31:53 crc kubenswrapper[4959]: E0121 13:31:53.025856 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9485d7a3-a28a-47b1-b949-197adac8d89c" containerName="mariadb-account-create-update" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.025864 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="9485d7a3-a28a-47b1-b949-197adac8d89c" containerName="mariadb-account-create-update" Jan 21 13:31:53 crc kubenswrapper[4959]: E0121 13:31:53.025882 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc282d4-ce0c-40cc-be72-407c31c0effe" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.025892 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc282d4-ce0c-40cc-be72-407c31c0effe" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: E0121 13:31:53.025900 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d874dcfc-b5e9-42a8-bde8-fe6d8e998af0" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.025909 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="d874dcfc-b5e9-42a8-bde8-fe6d8e998af0" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.026123 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="9485d7a3-a28a-47b1-b949-197adac8d89c" containerName="mariadb-account-create-update" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.026144 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.026158 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerName="neutron-httpd" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.026168 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff28e6a-1072-4f12-8b90-e62346cdcf59" containerName="mariadb-account-create-update" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.026180 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f15817e-88bb-4e43-afd3-024e73dd60f5" containerName="mariadb-account-create-update" Jan 21 13:31:53 
crc kubenswrapper[4959]: I0121 13:31:53.026190 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc282d4-ce0c-40cc-be72-407c31c0effe" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.026202 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="d874dcfc-b5e9-42a8-bde8-fe6d8e998af0" containerName="mariadb-database-create" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.026212 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ffb7d9-1580-4d6c-bceb-5fa26a5f6d54" containerName="neutron-api" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.027314 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.031949 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-q45jc" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.031968 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.033072 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.035326 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d6rms"] Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.084625 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-config-data\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.084770 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-scripts\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.084826 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5n7\" (UniqueName: \"kubernetes.io/projected/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-kube-api-access-hj5n7\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.084915 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.186250 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " 
pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.186322 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-config-data\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.186405 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-scripts\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.186447 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5n7\" (UniqueName: \"kubernetes.io/projected/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-kube-api-access-hj5n7\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.191853 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-scripts\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.192552 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-config-data\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.192634 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.203127 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5n7\" (UniqueName: \"kubernetes.io/projected/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-kube-api-access-hj5n7\") pod \"nova-cell0-conductor-db-sync-d6rms\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.350040 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.455085 4959 generic.go:334] "Generic (PLEG): container finished" podID="747d303f-591f-4630-aba1-9a53e6d2f515" containerID="48d40c153141fcaaaea2a7d4fc14fc0c0f666a4d31406057a9e6a5a1b3bd220d" exitCode=0 Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.455522 4959 generic.go:334] "Generic (PLEG): container finished" podID="747d303f-591f-4630-aba1-9a53e6d2f515" containerID="7540b773b8d37977087f40477bffde81ba0cc4061bf4eb5540ada19262a11a67" exitCode=2 Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.455538 4959 generic.go:334] "Generic (PLEG): container finished" podID="747d303f-591f-4630-aba1-9a53e6d2f515" containerID="bc51e0f32efee35699f42a04bd0213fb4a363464e46205be5ed59ed6d1f89ad9" exitCode=0 Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.455546 4959 generic.go:334] "Generic (PLEG): container finished" podID="747d303f-591f-4630-aba1-9a53e6d2f515" containerID="ea9b601f18215ecbf86794987508d57c34f25374df87db4df8cb04432135f650" exitCode=0 Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.455594 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerDied","Data":"48d40c153141fcaaaea2a7d4fc14fc0c0f666a4d31406057a9e6a5a1b3bd220d"} Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.455627 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerDied","Data":"7540b773b8d37977087f40477bffde81ba0cc4061bf4eb5540ada19262a11a67"} Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.455684 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerDied","Data":"bc51e0f32efee35699f42a04bd0213fb4a363464e46205be5ed59ed6d1f89ad9"} Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.455698 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerDied","Data":"ea9b601f18215ecbf86794987508d57c34f25374df87db4df8cb04432135f650"} Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.581297 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.694199 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-combined-ca-bundle\") pod \"747d303f-591f-4630-aba1-9a53e6d2f515\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.694578 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-config-data\") pod \"747d303f-591f-4630-aba1-9a53e6d2f515\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.694622 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-scripts\") pod \"747d303f-591f-4630-aba1-9a53e6d2f515\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.694707 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-run-httpd\") pod \"747d303f-591f-4630-aba1-9a53e6d2f515\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.694740 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-log-httpd\") pod \"747d303f-591f-4630-aba1-9a53e6d2f515\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.694764 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89gs5\" (UniqueName: \"kubernetes.io/projected/747d303f-591f-4630-aba1-9a53e6d2f515-kube-api-access-89gs5\") pod \"747d303f-591f-4630-aba1-9a53e6d2f515\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.694803 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-sg-core-conf-yaml\") pod \"747d303f-591f-4630-aba1-9a53e6d2f515\" (UID: \"747d303f-591f-4630-aba1-9a53e6d2f515\") " Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.696372 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "747d303f-591f-4630-aba1-9a53e6d2f515" (UID: "747d303f-591f-4630-aba1-9a53e6d2f515"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.696440 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "747d303f-591f-4630-aba1-9a53e6d2f515" (UID: "747d303f-591f-4630-aba1-9a53e6d2f515"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.700287 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-scripts" (OuterVolumeSpecName: "scripts") pod "747d303f-591f-4630-aba1-9a53e6d2f515" (UID: "747d303f-591f-4630-aba1-9a53e6d2f515"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.704424 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747d303f-591f-4630-aba1-9a53e6d2f515-kube-api-access-89gs5" (OuterVolumeSpecName: "kube-api-access-89gs5") pod "747d303f-591f-4630-aba1-9a53e6d2f515" (UID: "747d303f-591f-4630-aba1-9a53e6d2f515"). InnerVolumeSpecName "kube-api-access-89gs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.723237 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "747d303f-591f-4630-aba1-9a53e6d2f515" (UID: "747d303f-591f-4630-aba1-9a53e6d2f515"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.780314 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "747d303f-591f-4630-aba1-9a53e6d2f515" (UID: "747d303f-591f-4630-aba1-9a53e6d2f515"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.796533 4959 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.796569 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.796582 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.796592 4959 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.796602 4959 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747d303f-591f-4630-aba1-9a53e6d2f515-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.796611 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89gs5\" (UniqueName: \"kubernetes.io/projected/747d303f-591f-4630-aba1-9a53e6d2f515-kube-api-access-89gs5\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.800574 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-config-data" (OuterVolumeSpecName: "config-data") pod "747d303f-591f-4630-aba1-9a53e6d2f515" (UID: "747d303f-591f-4630-aba1-9a53e6d2f515"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.837330 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d6rms"] Jan 21 13:31:53 crc kubenswrapper[4959]: W0121 13:31:53.839135 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c2a88cc_56f3_435c_9d5c_b64a38fc25b8.slice/crio-1dd8e5e012adc7fb84fd02aec568dffb2ee9f80feab69a7b61a80e41ec9eca98 WatchSource:0}: Error finding container 1dd8e5e012adc7fb84fd02aec568dffb2ee9f80feab69a7b61a80e41ec9eca98: Status 404 returned error can't find the container with id 1dd8e5e012adc7fb84fd02aec568dffb2ee9f80feab69a7b61a80e41ec9eca98 Jan 21 13:31:53 crc kubenswrapper[4959]: I0121 13:31:53.898505 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747d303f-591f-4630-aba1-9a53e6d2f515-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.466078 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d6rms" event={"ID":"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8","Type":"ContainerStarted","Data":"1dd8e5e012adc7fb84fd02aec568dffb2ee9f80feab69a7b61a80e41ec9eca98"} Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.468527 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747d303f-591f-4630-aba1-9a53e6d2f515","Type":"ContainerDied","Data":"49a4873da9a48a435cbc6add16a82f938160e0c2a6dcdde17e959c4641e8ef0b"} Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.468657 4959 scope.go:117] "RemoveContainer" containerID="48d40c153141fcaaaea2a7d4fc14fc0c0f666a4d31406057a9e6a5a1b3bd220d" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.468830 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.504993 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.507044 4959 scope.go:117] "RemoveContainer" containerID="7540b773b8d37977087f40477bffde81ba0cc4061bf4eb5540ada19262a11a67" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.515048 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.533616 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:54 crc kubenswrapper[4959]: E0121 13:31:54.534363 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="proxy-httpd" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.534451 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="proxy-httpd" Jan 21 13:31:54 crc kubenswrapper[4959]: E0121 13:31:54.534547 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="sg-core" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.534627 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="sg-core" Jan 21 13:31:54 crc kubenswrapper[4959]: E0121 13:31:54.534707 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="ceilometer-notification-agent" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.534777 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="ceilometer-notification-agent" Jan 21 13:31:54 crc kubenswrapper[4959]: E0121 13:31:54.534852 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="ceilometer-central-agent" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.534936 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="ceilometer-central-agent" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.536364 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="sg-core" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.540255 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="proxy-httpd" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.540350 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="ceilometer-central-agent" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.540407 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" containerName="ceilometer-notification-agent" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.542145 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.542354 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.544971 4959 scope.go:117] "RemoveContainer" containerID="bc51e0f32efee35699f42a04bd0213fb4a363464e46205be5ed59ed6d1f89ad9" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.547251 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.547743 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.583773 4959 scope.go:117] "RemoveContainer" containerID="ea9b601f18215ecbf86794987508d57c34f25374df87db4df8cb04432135f650" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.613375 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.613420 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.613481 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-log-httpd\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.613538 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-run-httpd\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.613566 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-config-data\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.613586 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-scripts\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.613633 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xl88\" (UniqueName: \"kubernetes.io/projected/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-kube-api-access-8xl88\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.715491 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xl88\" (UniqueName: 
\"kubernetes.io/projected/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-kube-api-access-8xl88\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.715581 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.715609 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.715666 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-log-httpd\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.715712 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-run-httpd\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.715747 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-config-data\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.715771 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-scripts\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.717168 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-log-httpd\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.717675 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-run-httpd\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.722184 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-scripts\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.722772 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.722833 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-config-data\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.733280 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xl88\" (UniqueName: \"kubernetes.io/projected/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-kube-api-access-8xl88\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.740626 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " pod="openstack/ceilometer-0" Jan 21 13:31:54 crc kubenswrapper[4959]: I0121 13:31:54.871847 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:31:55 crc kubenswrapper[4959]: I0121 13:31:55.302934 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747d303f-591f-4630-aba1-9a53e6d2f515" path="/var/lib/kubelet/pods/747d303f-591f-4630-aba1-9a53e6d2f515/volumes" Jan 21 13:31:55 crc kubenswrapper[4959]: I0121 13:31:55.318408 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:31:55 crc kubenswrapper[4959]: W0121 13:31:55.333510 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccabc79a_7b30_4e5f_8b1f_a978016f0d54.slice/crio-f24dd9863a9f96ac81098353823951ebd32da83da49d61bb7b75352094ffa05f WatchSource:0}: Error finding container f24dd9863a9f96ac81098353823951ebd32da83da49d61bb7b75352094ffa05f: Status 404 returned error can't find the container with id f24dd9863a9f96ac81098353823951ebd32da83da49d61bb7b75352094ffa05f Jan 21 13:31:55 crc kubenswrapper[4959]: I0121 13:31:55.496392 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerStarted","Data":"f24dd9863a9f96ac81098353823951ebd32da83da49d61bb7b75352094ffa05f"} Jan 21 13:31:56 crc kubenswrapper[4959]: I0121 13:31:56.505435 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerStarted","Data":"749f06628660990176001c5674ad44fee6c98e817af25d1df2e06fa4c4d821d7"} Jan 21 13:32:01 crc kubenswrapper[4959]: I0121 13:32:01.558574 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d6rms" event={"ID":"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8","Type":"ContainerStarted","Data":"4d9827f4fd4fada3519d3cd2c23f7aed651e676e6e0f76bd7bce8e29a0832956"} Jan 21 13:32:01 crc kubenswrapper[4959]: I0121 13:32:01.561828 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerStarted","Data":"810b86129a96dda08a25ff8d0de5b6a68815627d7d87649add41508c4eaa3e50"} Jan 21 13:32:01 crc kubenswrapper[4959]: I0121 13:32:01.583353 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-d6rms" podStartSLOduration=1.419244692 podStartE2EDuration="8.583336098s" podCreationTimestamp="2026-01-21 13:31:53 +0000 UTC" firstStartedPulling="2026-01-21 13:31:53.841961737 +0000 UTC m=+1374.804992280" lastFinishedPulling="2026-01-21 13:32:01.006053143 +0000 UTC m=+1381.969083686" observedRunningTime="2026-01-21 13:32:01.579946965 +0000 UTC m=+1382.542977508" watchObservedRunningTime="2026-01-21 13:32:01.583336098 +0000 UTC m=+1382.546366641" Jan 21 13:32:02 crc kubenswrapper[4959]: I0121 13:32:02.570441 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerStarted","Data":"dff9d8338451537e27010d4ebfe0574cbc0cc276cf331df16906cc70d1be47d6"} Jan 21 13:32:03 crc kubenswrapper[4959]: I0121 13:32:03.581899 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerStarted","Data":"b2e1847b4543709a0278403a037d974f8a85d30ba0f742bf9d779eb1907b7b6d"} Jan 21 13:32:03 crc kubenswrapper[4959]: I0121 13:32:03.582586 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:32:03 crc kubenswrapper[4959]: I0121 13:32:03.606961 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.783617934 podStartE2EDuration="9.606939982s" podCreationTimestamp="2026-01-21 13:31:54 +0000 UTC" firstStartedPulling="2026-01-21 13:31:55.338184926 +0000 UTC m=+1376.301215469" lastFinishedPulling="2026-01-21 13:32:03.161506974 +0000 UTC m=+1384.124537517" observedRunningTime="2026-01-21 13:32:03.601568294 +0000 UTC m=+1384.564598847" watchObservedRunningTime="2026-01-21 13:32:03.606939982 +0000 UTC m=+1384.569970525" Jan 21 13:32:14 crc kubenswrapper[4959]: I0121 13:32:14.696608 4959 generic.go:334] "Generic (PLEG): container finished" podID="3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" containerID="4d9827f4fd4fada3519d3cd2c23f7aed651e676e6e0f76bd7bce8e29a0832956" exitCode=0 Jan 21 13:32:14 crc kubenswrapper[4959]: I0121 13:32:14.696864 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d6rms" event={"ID":"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8","Type":"ContainerDied","Data":"4d9827f4fd4fada3519d3cd2c23f7aed651e676e6e0f76bd7bce8e29a0832956"} Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.027828 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.221249 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-combined-ca-bundle\") pod \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.221466 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj5n7\" (UniqueName: \"kubernetes.io/projected/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-kube-api-access-hj5n7\") pod \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.221505 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-scripts\") pod \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.221566 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-config-data\") pod \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\" (UID: \"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8\") " Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.227093 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-kube-api-access-hj5n7" (OuterVolumeSpecName: "kube-api-access-hj5n7") pod "3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" (UID: "3c2a88cc-56f3-435c-9d5c-b64a38fc25b8"). InnerVolumeSpecName "kube-api-access-hj5n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.227206 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-scripts" (OuterVolumeSpecName: "scripts") pod "3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" (UID: "3c2a88cc-56f3-435c-9d5c-b64a38fc25b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.245886 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-config-data" (OuterVolumeSpecName: "config-data") pod "3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" (UID: "3c2a88cc-56f3-435c-9d5c-b64a38fc25b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.252447 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" (UID: "3c2a88cc-56f3-435c-9d5c-b64a38fc25b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.324539 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.324578 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.324592 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj5n7\" (UniqueName: \"kubernetes.io/projected/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-kube-api-access-hj5n7\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.324606 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.715872 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-d6rms" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.715829 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-d6rms" event={"ID":"3c2a88cc-56f3-435c-9d5c-b64a38fc25b8","Type":"ContainerDied","Data":"1dd8e5e012adc7fb84fd02aec568dffb2ee9f80feab69a7b61a80e41ec9eca98"} Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.716066 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd8e5e012adc7fb84fd02aec568dffb2ee9f80feab69a7b61a80e41ec9eca98" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.814164 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 13:32:16 crc kubenswrapper[4959]: E0121 13:32:16.814515 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" containerName="nova-cell0-conductor-db-sync" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.814530 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" containerName="nova-cell0-conductor-db-sync" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.814677 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" containerName="nova-cell0-conductor-db-sync" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.815267 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.817382 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.817476 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-q45jc" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.826367 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.934654 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s44h\" (UniqueName: \"kubernetes.io/projected/3df244ce-5f7c-4173-8ec6-d64e2a462876-kube-api-access-5s44h\") pod \"nova-cell0-conductor-0\" (UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.934865 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df244ce-5f7c-4173-8ec6-d64e2a462876-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:16 crc kubenswrapper[4959]: I0121 13:32:16.935002 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df244ce-5f7c-4173-8ec6-d64e2a462876-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.036158 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df244ce-5f7c-4173-8ec6-d64e2a462876-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.036254 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df244ce-5f7c-4173-8ec6-d64e2a462876-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.036331 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s44h\" (UniqueName: \"kubernetes.io/projected/3df244ce-5f7c-4173-8ec6-d64e2a462876-kube-api-access-5s44h\") pod \"nova-cell0-conductor-0\" (UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.042142 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df244ce-5f7c-4173-8ec6-d64e2a462876-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.046185 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df244ce-5f7c-4173-8ec6-d64e2a462876-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.058747 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s44h\" (UniqueName: \"kubernetes.io/projected/3df244ce-5f7c-4173-8ec6-d64e2a462876-kube-api-access-5s44h\") pod \"nova-cell0-conductor-0\" (UID: \"3df244ce-5f7c-4173-8ec6-d64e2a462876\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.131945 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.559200 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 13:32:17 crc kubenswrapper[4959]: I0121 13:32:17.726717 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3df244ce-5f7c-4173-8ec6-d64e2a462876","Type":"ContainerStarted","Data":"d717528108f409b069dc2de99d00dc7827c056da762211b06318b192a294b41a"} Jan 21 13:32:18 crc kubenswrapper[4959]: I0121 13:32:18.737904 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3df244ce-5f7c-4173-8ec6-d64e2a462876","Type":"ContainerStarted","Data":"7f035b6a6bf0366cfa94d2b4f05d1b9945a63e25d633327f8d264df8447410a5"} Jan 21 13:32:18 crc kubenswrapper[4959]: I0121 13:32:18.738245 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:18 crc kubenswrapper[4959]: I0121 13:32:18.777823 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.777803948 podStartE2EDuration="2.777803948s" podCreationTimestamp="2026-01-21 13:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:32:18.769333514 +0000 UTC m=+1399.732364057" watchObservedRunningTime="2026-01-21 13:32:18.777803948 +0000 UTC m=+1399.740834491" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.162152 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.627176 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-j649g"] Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.628310 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.630400 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-scripts\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.630553 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-config-data\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.630598 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.630624 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76b29\" (UniqueName: \"kubernetes.io/projected/4a8e32a2-8454-4269-a372-96d891edcdda-kube-api-access-76b29\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.632380 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.636205 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.642311 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-j649g"] Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.739308 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-scripts\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.739480 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-config-data\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.739520 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.739556 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76b29\" (UniqueName: 
\"kubernetes.io/projected/4a8e32a2-8454-4269-a372-96d891edcdda-kube-api-access-76b29\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.746860 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.749466 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-scripts\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.746559 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-config-data\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.778884 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76b29\" (UniqueName: \"kubernetes.io/projected/4a8e32a2-8454-4269-a372-96d891edcdda-kube-api-access-76b29\") pod \"nova-cell0-cell-mapping-j649g\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.921801 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.923904 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.930037 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.942919 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.944696 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.948061 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vl6\" (UniqueName: \"kubernetes.io/projected/e41b2cea-29e2-4b7f-bcc6-b920099a3872-kube-api-access-z9vl6\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.948249 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-config-data\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.948348 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41b2cea-29e2-4b7f-bcc6-b920099a3872-logs\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.948409 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.952684 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.953310 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:22 crc kubenswrapper[4959]: I0121 13:32:22.958353 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.002441 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.049728 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.051796 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.055163 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6st8\" (UniqueName: \"kubernetes.io/projected/c4fbea3d-bd4f-4889-be56-186f88e4e96c-kube-api-access-q6st8\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.057378 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vl6\" (UniqueName: \"kubernetes.io/projected/e41b2cea-29e2-4b7f-bcc6-b920099a3872-kube-api-access-z9vl6\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.062360 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-config-data\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.062594 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-config-data\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.062791 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4fbea3d-bd4f-4889-be56-186f88e4e96c-logs\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.062997 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41b2cea-29e2-4b7f-bcc6-b920099a3872-logs\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.063202 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.063375 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.065978 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41b2cea-29e2-4b7f-bcc6-b920099a3872-logs\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.072868 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.073378 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.074141 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-config-data\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.101811 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.165081 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6st8\" (UniqueName: \"kubernetes.io/projected/c4fbea3d-bd4f-4889-be56-186f88e4e96c-kube-api-access-q6st8\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.165216 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-config-data\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.165262 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvmsr\" (UniqueName: \"kubernetes.io/projected/0b08f48f-c239-4fe2-9f70-f35c0877fd64-kube-api-access-lvmsr\") pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.165287 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4fbea3d-bd4f-4889-be56-186f88e4e96c-logs\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.165304 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.165368 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.165387 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-config-data\") pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.169959 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4fbea3d-bd4f-4889-be56-186f88e4e96c-logs\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.182790 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vl6\" (UniqueName: \"kubernetes.io/projected/e41b2cea-29e2-4b7f-bcc6-b920099a3872-kube-api-access-z9vl6\") pod \"nova-metadata-0\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.191181 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-cp8q7"] Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.192702 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.195652 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-config-data\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.200011 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.209829 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-cp8q7"] Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.213746 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6st8\" (UniqueName: \"kubernetes.io/projected/c4fbea3d-bd4f-4889-be56-186f88e4e96c-kube-api-access-q6st8\") pod \"nova-api-0\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.266684 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-config\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.266736 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.266759 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.266791 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-config-data\") 
pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.266811 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvsj\" (UniqueName: \"kubernetes.io/projected/fa6bef81-a325-412f-b2c5-80d6b904abd3-kube-api-access-9lvsj\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.266887 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.266920 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvmsr\" (UniqueName: \"kubernetes.io/projected/0b08f48f-c239-4fe2-9f70-f35c0877fd64-kube-api-access-lvmsr\") pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.266940 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.270034 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.276552 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-config-data\") pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.299186 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.323717 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvmsr\" (UniqueName: \"kubernetes.io/projected/0b08f48f-c239-4fe2-9f70-f35c0877fd64-kube-api-access-lvmsr\") pod \"nova-scheduler-0\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.331376 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.332887 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.334141 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.337636 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.338456 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.377358 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-config\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.377718 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.377861 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.378020 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lvsj\" (UniqueName: \"kubernetes.io/projected/fa6bef81-a325-412f-b2c5-80d6b904abd3-kube-api-access-9lvsj\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.378204 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.378397 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.378607 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j26c\" (UniqueName: \"kubernetes.io/projected/198f274c-7d83-4c1d-9145-083c1987df6c-kube-api-access-4j26c\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.378881 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc 
kubenswrapper[4959]: I0121 13:32:23.380627 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-config\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.381497 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.382609 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.385947 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.441976 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lvsj\" (UniqueName: \"kubernetes.io/projected/fa6bef81-a325-412f-b2c5-80d6b904abd3-kube-api-access-9lvsj\") pod \"dnsmasq-dns-8b8cf6657-cp8q7\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.483781 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.483871 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.483920 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j26c\" (UniqueName: \"kubernetes.io/projected/198f274c-7d83-4c1d-9145-083c1987df6c-kube-api-access-4j26c\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.490285 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.495464 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.507631 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j26c\" (UniqueName: \"kubernetes.io/projected/198f274c-7d83-4c1d-9145-083c1987df6c-kube-api-access-4j26c\") pod \"nova-cell1-novncproxy-0\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.565751 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.625088 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.677682 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:23 crc kubenswrapper[4959]: I0121 13:32:23.962404 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.013184 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-j649g"] Jan 21 13:32:24 crc kubenswrapper[4959]: W0121 13:32:24.028782 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a8e32a2_8454_4269_a372_96d891edcdda.slice/crio-439c906f780028399038916f589ed803fc5d66611b2944929bce3d7bb13dd6da WatchSource:0}: Error finding container 439c906f780028399038916f589ed803fc5d66611b2944929bce3d7bb13dd6da: Status 404 returned error can't find the container with id 439c906f780028399038916f589ed803fc5d66611b2944929bce3d7bb13dd6da Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.090620 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s4jcm"] Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.092054 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.095294 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.099562 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.101814 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s4jcm"] Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.138489 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:24 crc kubenswrapper[4959]: W0121 13:32:24.139087 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode41b2cea_29e2_4b7f_bcc6_b920099a3872.slice/crio-e4b2c48b3b37220a10a2830db82a243407b601585110a736414d659d632dd20d WatchSource:0}: Error finding container e4b2c48b3b37220a10a2830db82a243407b601585110a736414d659d632dd20d: Status 404 returned error can't find the container with id e4b2c48b3b37220a10a2830db82a243407b601585110a736414d659d632dd20d Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.204601 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.204765 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-config-data\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.204809 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhjp\" (UniqueName: \"kubernetes.io/projected/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-kube-api-access-cqhjp\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.204867 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-scripts\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.306518 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.306665 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-config-data\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.306764 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqhjp\" (UniqueName: \"kubernetes.io/projected/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-kube-api-access-cqhjp\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.306841 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-scripts\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.310792 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:32:24 crc kubenswrapper[4959]: W0121 13:32:24.311176 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod198f274c_7d83_4c1d_9145_083c1987df6c.slice/crio-3f0a2104a2b2cc1c5c27229ba91cbbf9ac18fe28052e1c8dd476741e96722073 WatchSource:0}: Error finding container 3f0a2104a2b2cc1c5c27229ba91cbbf9ac18fe28052e1c8dd476741e96722073: Status 404 returned error can't find the container with id 3f0a2104a2b2cc1c5c27229ba91cbbf9ac18fe28052e1c8dd476741e96722073 Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.311699 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.312020 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-scripts\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.315826 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-config-data\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.336623 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqhjp\" (UniqueName: \"kubernetes.io/projected/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-kube-api-access-cqhjp\") pod \"nova-cell1-conductor-db-sync-s4jcm\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.354579 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-cp8q7"] Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.364689 4959 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.491027 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.811241 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j649g" event={"ID":"4a8e32a2-8454-4269-a372-96d891edcdda","Type":"ContainerStarted","Data":"39648aec8956149a5e232f9d3e2ce69a664ac1bf4dab5870f1f52d2a5646a833"} Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.811650 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j649g" event={"ID":"4a8e32a2-8454-4269-a372-96d891edcdda","Type":"ContainerStarted","Data":"439c906f780028399038916f589ed803fc5d66611b2944929bce3d7bb13dd6da"} Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.813416 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b08f48f-c239-4fe2-9f70-f35c0877fd64","Type":"ContainerStarted","Data":"a2444da6c9abec8d55f368c9f01b5c812ee93fb12eac922cb29f9be0b91b1155"} Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.816726 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" event={"ID":"fa6bef81-a325-412f-b2c5-80d6b904abd3","Type":"ContainerStarted","Data":"a413bbb02695193a9b8c2f7cd1531a34900c5193ca60f89cb7da4cdd2ff6b27e"} Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.818766 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"198f274c-7d83-4c1d-9145-083c1987df6c","Type":"ContainerStarted","Data":"3f0a2104a2b2cc1c5c27229ba91cbbf9ac18fe28052e1c8dd476741e96722073"} Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.821762 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4fbea3d-bd4f-4889-be56-186f88e4e96c","Type":"ContainerStarted","Data":"8e21a6c4664d543cdc43d13e4987f5293ebf944a188bd86a99c5b3dc8e11cc4c"} Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.822647 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e41b2cea-29e2-4b7f-bcc6-b920099a3872","Type":"ContainerStarted","Data":"e4b2c48b3b37220a10a2830db82a243407b601585110a736414d659d632dd20d"} Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.839393 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-j649g" podStartSLOduration=2.8393752340000002 podStartE2EDuration="2.839375234s" podCreationTimestamp="2026-01-21 13:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:32:24.831042944 +0000 UTC m=+1405.794073487" watchObservedRunningTime="2026-01-21 13:32:24.839375234 +0000 UTC m=+1405.802405777" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.881881 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 13:32:24 crc kubenswrapper[4959]: I0121 13:32:24.984742 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s4jcm"] Jan 21 13:32:25 crc kubenswrapper[4959]: I0121 13:32:25.847318 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" 
event={"ID":"aa5c2306-fe9b-475e-8b0e-9ecf06f69050","Type":"ContainerStarted","Data":"b41fe1ef7128f9dba919b6f450fb67cd62961492953991bcf72304f4a3d278c5"} Jan 21 13:32:25 crc kubenswrapper[4959]: I0121 13:32:25.847910 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" event={"ID":"aa5c2306-fe9b-475e-8b0e-9ecf06f69050","Type":"ContainerStarted","Data":"de3d89b7bfb08b0c30dad008119d1b1edd2bb7a68887b43b154fafa70cb0d714"} Jan 21 13:32:25 crc kubenswrapper[4959]: I0121 13:32:25.850774 4959 generic.go:334] "Generic (PLEG): container finished" podID="fa6bef81-a325-412f-b2c5-80d6b904abd3" containerID="cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929" exitCode=0 Jan 21 13:32:25 crc kubenswrapper[4959]: I0121 13:32:25.850849 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" event={"ID":"fa6bef81-a325-412f-b2c5-80d6b904abd3","Type":"ContainerDied","Data":"cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929"} Jan 21 13:32:25 crc kubenswrapper[4959]: I0121 13:32:25.880032 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" podStartSLOduration=1.8800103030000002 podStartE2EDuration="1.880010303s" podCreationTimestamp="2026-01-21 13:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:32:25.869474181 +0000 UTC m=+1406.832504734" watchObservedRunningTime="2026-01-21 13:32:25.880010303 +0000 UTC m=+1406.843040846" Jan 21 13:32:26 crc kubenswrapper[4959]: I0121 13:32:26.720519 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:32:26 crc kubenswrapper[4959]: I0121 13:32:26.730364 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:26 crc kubenswrapper[4959]: I0121 13:32:26.872693 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" event={"ID":"fa6bef81-a325-412f-b2c5-80d6b904abd3","Type":"ContainerStarted","Data":"046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab"} Jan 21 13:32:26 crc kubenswrapper[4959]: I0121 13:32:26.873374 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:26 crc kubenswrapper[4959]: I0121 13:32:26.901330 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" podStartSLOduration=3.901311927 podStartE2EDuration="3.901311927s" podCreationTimestamp="2026-01-21 13:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:32:26.900711121 +0000 UTC m=+1407.863741684" watchObservedRunningTime="2026-01-21 13:32:26.901311927 +0000 UTC m=+1407.864342470" Jan 21 13:32:29 crc kubenswrapper[4959]: I0121 13:32:29.419143 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:32:29 crc kubenswrapper[4959]: I0121 13:32:29.420075 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c4c3e540-1be8-41f8-92e8-1371c406c6f2" containerName="kube-state-metrics" containerID="cri-o://483dd939c23f3cc8282b4883a07c0249ade7cd4e1ff75800984bce52ebaac94c" gracePeriod=30 Jan 21 13:32:29 crc kubenswrapper[4959]: I0121 
13:32:29.904559 4959 generic.go:334] "Generic (PLEG): container finished" podID="c4c3e540-1be8-41f8-92e8-1371c406c6f2" containerID="483dd939c23f3cc8282b4883a07c0249ade7cd4e1ff75800984bce52ebaac94c" exitCode=2 Jan 21 13:32:29 crc kubenswrapper[4959]: I0121 13:32:29.904607 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4c3e540-1be8-41f8-92e8-1371c406c6f2","Type":"ContainerDied","Data":"483dd939c23f3cc8282b4883a07c0249ade7cd4e1ff75800984bce52ebaac94c"} Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.344185 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.345374 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="ceilometer-central-agent" containerID="cri-o://749f06628660990176001c5674ad44fee6c98e817af25d1df2e06fa4c4d821d7" gracePeriod=30 Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.345460 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="proxy-httpd" containerID="cri-o://b2e1847b4543709a0278403a037d974f8a85d30ba0f742bf9d779eb1907b7b6d" gracePeriod=30 Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.345509 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="sg-core" containerID="cri-o://dff9d8338451537e27010d4ebfe0574cbc0cc276cf331df16906cc70d1be47d6" gracePeriod=30 Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.345502 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="ceilometer-notification-agent" containerID="cri-o://810b86129a96dda08a25ff8d0de5b6a68815627d7d87649add41508c4eaa3e50" gracePeriod=30 Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.968808 4959 generic.go:334] "Generic (PLEG): container finished" podID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerID="b2e1847b4543709a0278403a037d974f8a85d30ba0f742bf9d779eb1907b7b6d" exitCode=0 Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.969171 4959 generic.go:334] "Generic (PLEG): container finished" podID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerID="dff9d8338451537e27010d4ebfe0574cbc0cc276cf331df16906cc70d1be47d6" exitCode=2 Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.969196 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerDied","Data":"b2e1847b4543709a0278403a037d974f8a85d30ba0f742bf9d779eb1907b7b6d"} Jan 21 13:32:30 crc kubenswrapper[4959]: I0121 13:32:30.969228 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerDied","Data":"dff9d8338451537e27010d4ebfe0574cbc0cc276cf331df16906cc70d1be47d6"} Jan 21 13:32:31 crc kubenswrapper[4959]: I0121 13:32:31.983043 4959 generic.go:334] "Generic (PLEG): container finished" podID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerID="749f06628660990176001c5674ad44fee6c98e817af25d1df2e06fa4c4d821d7" exitCode=0 Jan 21 13:32:31 crc kubenswrapper[4959]: I0121 13:32:31.983109 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerDied","Data":"749f06628660990176001c5674ad44fee6c98e817af25d1df2e06fa4c4d821d7"} Jan 21 13:32:32 crc kubenswrapper[4959]: I0121 13:32:32.529273 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="c4c3e540-1be8-41f8-92e8-1371c406c6f2" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": dial tcp 10.217.0.103:8081: connect: connection refused" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.027650 4959 generic.go:334] "Generic (PLEG): container finished" podID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerID="810b86129a96dda08a25ff8d0de5b6a68815627d7d87649add41508c4eaa3e50" exitCode=0 Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.028203 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerDied","Data":"810b86129a96dda08a25ff8d0de5b6a68815627d7d87649add41508c4eaa3e50"} Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.132615 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.236049 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.284768 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v48zz\" (UniqueName: \"kubernetes.io/projected/c4c3e540-1be8-41f8-92e8-1371c406c6f2-kube-api-access-v48zz\") pod \"c4c3e540-1be8-41f8-92e8-1371c406c6f2\" (UID: \"c4c3e540-1be8-41f8-92e8-1371c406c6f2\") " Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.290689 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c3e540-1be8-41f8-92e8-1371c406c6f2-kube-api-access-v48zz" (OuterVolumeSpecName: "kube-api-access-v48zz") pod "c4c3e540-1be8-41f8-92e8-1371c406c6f2" (UID: "c4c3e540-1be8-41f8-92e8-1371c406c6f2"). InnerVolumeSpecName "kube-api-access-v48zz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.387179 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xl88\" (UniqueName: \"kubernetes.io/projected/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-kube-api-access-8xl88\") pod \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.387300 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-sg-core-conf-yaml\") pod \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.387355 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-combined-ca-bundle\") pod \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.387393 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-scripts\") pod \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.387447 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-run-httpd\") pod \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.387550 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-log-httpd\") pod \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.387599 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-config-data\") pod \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\" (UID: \"ccabc79a-7b30-4e5f-8b1f-a978016f0d54\") " Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.388412 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v48zz\" (UniqueName: \"kubernetes.io/projected/c4c3e540-1be8-41f8-92e8-1371c406c6f2-kube-api-access-v48zz\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.413909 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ccabc79a-7b30-4e5f-8b1f-a978016f0d54" (UID: "ccabc79a-7b30-4e5f-8b1f-a978016f0d54"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.422294 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-kube-api-access-8xl88" (OuterVolumeSpecName: "kube-api-access-8xl88") pod "ccabc79a-7b30-4e5f-8b1f-a978016f0d54" (UID: "ccabc79a-7b30-4e5f-8b1f-a978016f0d54"). InnerVolumeSpecName "kube-api-access-8xl88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.422835 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ccabc79a-7b30-4e5f-8b1f-a978016f0d54" (UID: "ccabc79a-7b30-4e5f-8b1f-a978016f0d54"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.430139 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-scripts" (OuterVolumeSpecName: "scripts") pod "ccabc79a-7b30-4e5f-8b1f-a978016f0d54" (UID: "ccabc79a-7b30-4e5f-8b1f-a978016f0d54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.464137 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ccabc79a-7b30-4e5f-8b1f-a978016f0d54" (UID: "ccabc79a-7b30-4e5f-8b1f-a978016f0d54"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.490346 4959 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.490394 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xl88\" (UniqueName: \"kubernetes.io/projected/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-kube-api-access-8xl88\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.490406 4959 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.490417 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.490432 4959 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.533256 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccabc79a-7b30-4e5f-8b1f-a978016f0d54" (UID: "ccabc79a-7b30-4e5f-8b1f-a978016f0d54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.566062 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-config-data" (OuterVolumeSpecName: "config-data") pod "ccabc79a-7b30-4e5f-8b1f-a978016f0d54" (UID: "ccabc79a-7b30-4e5f-8b1f-a978016f0d54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.593407 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.593466 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccabc79a-7b30-4e5f-8b1f-a978016f0d54-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.627942 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.712760 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-p22bw"] Jan 21 13:32:33 crc kubenswrapper[4959]: I0121 13:32:33.713138 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" podUID="00cdbda8-c419-4768-ac68-598950ed9387" containerName="dnsmasq-dns" containerID="cri-o://9cec4098898f9ea337b9a4b23290bd85a8da4a4f8ede77a700bbda1843a7b6cb" gracePeriod=10 Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.038559 4959 generic.go:334] "Generic (PLEG): container finished" podID="4a8e32a2-8454-4269-a372-96d891edcdda" containerID="39648aec8956149a5e232f9d3e2ce69a664ac1bf4dab5870f1f52d2a5646a833" exitCode=0 Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.038640 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j649g" event={"ID":"4a8e32a2-8454-4269-a372-96d891edcdda","Type":"ContainerDied","Data":"39648aec8956149a5e232f9d3e2ce69a664ac1bf4dab5870f1f52d2a5646a833"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.041231 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b08f48f-c239-4fe2-9f70-f35c0877fd64","Type":"ContainerStarted","Data":"657fd1f68cc14d3c50a8462039f273e6bd3aef2806b8e525e2357e5f49175561"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.074335 4959 generic.go:334] "Generic (PLEG): container finished" podID="00cdbda8-c419-4768-ac68-598950ed9387" containerID="9cec4098898f9ea337b9a4b23290bd85a8da4a4f8ede77a700bbda1843a7b6cb" exitCode=0 Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.074453 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" event={"ID":"00cdbda8-c419-4768-ac68-598950ed9387","Type":"ContainerDied","Data":"9cec4098898f9ea337b9a4b23290bd85a8da4a4f8ede77a700bbda1843a7b6cb"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.081928 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"198f274c-7d83-4c1d-9145-083c1987df6c","Type":"ContainerStarted","Data":"42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.082755 4959 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="198f274c-7d83-4c1d-9145-083c1987df6c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec" gracePeriod=30 Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.091713 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccabc79a-7b30-4e5f-8b1f-a978016f0d54","Type":"ContainerDied","Data":"f24dd9863a9f96ac81098353823951ebd32da83da49d61bb7b75352094ffa05f"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.091786 4959 scope.go:117] "RemoveContainer" containerID="b2e1847b4543709a0278403a037d974f8a85d30ba0f742bf9d779eb1907b7b6d" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.092286 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.099529 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4c3e540-1be8-41f8-92e8-1371c406c6f2","Type":"ContainerDied","Data":"225cb1bc8068e07aaf11369dfb849935a5cff9d68f639de3cc28bd8f018f6908"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.099710 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.116880 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4fbea3d-bd4f-4889-be56-186f88e4e96c","Type":"ContainerStarted","Data":"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.116942 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4fbea3d-bd4f-4889-be56-186f88e4e96c","Type":"ContainerStarted","Data":"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.121064 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e41b2cea-29e2-4b7f-bcc6-b920099a3872","Type":"ContainerStarted","Data":"2065a98e4ace8696e443ad70187e5279759f526e7fa159e7ed38008f2878f8af"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.121193 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e41b2cea-29e2-4b7f-bcc6-b920099a3872","Type":"ContainerStarted","Data":"4518c035663666e1a5db5d442824ecc9f3dda10e2fefdea55acb8ff90002feaf"} Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.121244 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerName="nova-metadata-log" containerID="cri-o://4518c035663666e1a5db5d442824ecc9f3dda10e2fefdea55acb8ff90002feaf" gracePeriod=30 Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.121357 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerName="nova-metadata-metadata" containerID="cri-o://2065a98e4ace8696e443ad70187e5279759f526e7fa159e7ed38008f2878f8af" gracePeriod=30 Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.128600 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.743246682 
podStartE2EDuration="12.128577881s" podCreationTimestamp="2026-01-21 13:32:22 +0000 UTC" firstStartedPulling="2026-01-21 13:32:24.359939805 +0000 UTC m=+1405.322970338" lastFinishedPulling="2026-01-21 13:32:32.745270994 +0000 UTC m=+1413.708301537" observedRunningTime="2026-01-21 13:32:34.111726665 +0000 UTC m=+1415.074757228" watchObservedRunningTime="2026-01-21 13:32:34.128577881 +0000 UTC m=+1415.091608424" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.191006 4959 scope.go:117] "RemoveContainer" containerID="dff9d8338451537e27010d4ebfe0574cbc0cc276cf331df16906cc70d1be47d6" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.269225 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.8415481099999997 podStartE2EDuration="11.269087307s" podCreationTimestamp="2026-01-21 13:32:23 +0000 UTC" firstStartedPulling="2026-01-21 13:32:24.316844093 +0000 UTC m=+1405.279874636" lastFinishedPulling="2026-01-21 13:32:32.74438329 +0000 UTC m=+1413.707413833" observedRunningTime="2026-01-21 13:32:34.145231401 +0000 UTC m=+1415.108261964" watchObservedRunningTime="2026-01-21 13:32:34.269087307 +0000 UTC m=+1415.232117850" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.291523 4959 scope.go:117] "RemoveContainer" containerID="810b86129a96dda08a25ff8d0de5b6a68815627d7d87649add41508c4eaa3e50" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.307522 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.319986 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.338863 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:32:34 crc kubenswrapper[4959]: E0121 13:32:34.339235 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="proxy-httpd" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339258 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="proxy-httpd" Jan 21 13:32:34 crc kubenswrapper[4959]: E0121 13:32:34.339282 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c3e540-1be8-41f8-92e8-1371c406c6f2" containerName="kube-state-metrics" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339288 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c3e540-1be8-41f8-92e8-1371c406c6f2" containerName="kube-state-metrics" Jan 21 13:32:34 crc kubenswrapper[4959]: E0121 13:32:34.339294 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="sg-core" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339300 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="sg-core" Jan 21 13:32:34 crc kubenswrapper[4959]: E0121 13:32:34.339311 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="ceilometer-central-agent" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339317 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="ceilometer-central-agent" Jan 21 13:32:34 crc kubenswrapper[4959]: E0121 13:32:34.339330 4959 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="ceilometer-notification-agent" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339335 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="ceilometer-notification-agent" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339589 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c3e540-1be8-41f8-92e8-1371c406c6f2" containerName="kube-state-metrics" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339615 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="sg-core" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339630 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="ceilometer-notification-agent" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339649 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="proxy-httpd" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.339662 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" containerName="ceilometer-central-agent" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.342189 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.345957 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.348169 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6czf7" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.349062 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.364785 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.377172 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.387050 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.392019 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.396900 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:32:34 crc kubenswrapper[4959]: E0121 13:32:34.397297 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cdbda8-c419-4768-ac68-598950ed9387" containerName="dnsmasq-dns" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.397315 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cdbda8-c419-4768-ac68-598950ed9387" containerName="dnsmasq-dns" Jan 21 13:32:34 crc kubenswrapper[4959]: E0121 13:32:34.397341 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cdbda8-c419-4768-ac68-598950ed9387" containerName="init" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.397348 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cdbda8-c419-4768-ac68-598950ed9387" containerName="init" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.397502 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="00cdbda8-c419-4768-ac68-598950ed9387" containerName="dnsmasq-dns" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.399084 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.400222 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.796530335 podStartE2EDuration="12.400212683s" podCreationTimestamp="2026-01-21 13:32:22 +0000 UTC" firstStartedPulling="2026-01-21 13:32:24.141530295 +0000 UTC m=+1405.104560838" lastFinishedPulling="2026-01-21 13:32:32.745212643 +0000 UTC m=+1413.708243186" observedRunningTime="2026-01-21 13:32:34.255259894 +0000 UTC m=+1415.218290437" watchObservedRunningTime="2026-01-21 13:32:34.400212683 +0000 UTC m=+1415.363243226" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.401664 4959 scope.go:117] "RemoveContainer" containerID="749f06628660990176001c5674ad44fee6c98e817af25d1df2e06fa4c4d821d7" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.404597 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.410707 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.412466 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.425152 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.637055834 podStartE2EDuration="12.425133862s" podCreationTimestamp="2026-01-21 13:32:22 +0000 UTC" firstStartedPulling="2026-01-21 13:32:23.957128985 +0000 UTC m=+1404.920159528" lastFinishedPulling="2026-01-21 13:32:32.745207013 +0000 UTC m=+1413.708237556" observedRunningTime="2026-01-21 13:32:34.291905298 +0000 UTC m=+1415.254935841" watchObservedRunningTime="2026-01-21 13:32:34.425133862 +0000 UTC m=+1415.388164405" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.425687 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.428585 4959 scope.go:117] "RemoveContainer" 
containerID="483dd939c23f3cc8282b4883a07c0249ade7cd4e1ff75800984bce52ebaac94c" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.524904 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-sb\") pod \"00cdbda8-c419-4768-ac68-598950ed9387\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.524955 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p2sf\" (UniqueName: \"kubernetes.io/projected/00cdbda8-c419-4768-ac68-598950ed9387-kube-api-access-6p2sf\") pod \"00cdbda8-c419-4768-ac68-598950ed9387\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.525153 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-dns-svc\") pod \"00cdbda8-c419-4768-ac68-598950ed9387\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.525229 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-config\") pod \"00cdbda8-c419-4768-ac68-598950ed9387\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.525272 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-nb\") pod \"00cdbda8-c419-4768-ac68-598950ed9387\" (UID: \"00cdbda8-c419-4768-ac68-598950ed9387\") " Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527161 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527219 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527289 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-config-data\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527357 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mll7\" (UniqueName: \"kubernetes.io/projected/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-kube-api-access-7mll7\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527446 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527528 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9mq6\" (UniqueName: \"kubernetes.io/projected/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-api-access-b9mq6\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527585 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527609 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-run-httpd\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527663 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-scripts\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527688 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527768 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-log-httpd\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.527791 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.538303 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00cdbda8-c419-4768-ac68-598950ed9387-kube-api-access-6p2sf" (OuterVolumeSpecName: "kube-api-access-6p2sf") pod "00cdbda8-c419-4768-ac68-598950ed9387" (UID: "00cdbda8-c419-4768-ac68-598950ed9387"). InnerVolumeSpecName "kube-api-access-6p2sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.588194 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00cdbda8-c419-4768-ac68-598950ed9387" (UID: "00cdbda8-c419-4768-ac68-598950ed9387"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.596380 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00cdbda8-c419-4768-ac68-598950ed9387" (UID: "00cdbda8-c419-4768-ac68-598950ed9387"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.596762 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00cdbda8-c419-4768-ac68-598950ed9387" (UID: "00cdbda8-c419-4768-ac68-598950ed9387"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.618969 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-config" (OuterVolumeSpecName: "config") pod "00cdbda8-c419-4768-ac68-598950ed9387" (UID: "00cdbda8-c419-4768-ac68-598950ed9387"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629575 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-log-httpd\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629620 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629683 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629700 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629724 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-config-data\") pod \"ceilometer-0\" (UID: 
\"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629749 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mll7\" (UniqueName: \"kubernetes.io/projected/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-kube-api-access-7mll7\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629782 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629811 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9mq6\" (UniqueName: \"kubernetes.io/projected/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-api-access-b9mq6\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629834 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629850 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-run-httpd\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629872 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-scripts\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629887 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629943 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629953 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p2sf\" (UniqueName: \"kubernetes.io/projected/00cdbda8-c419-4768-ac68-598950ed9387-kube-api-access-6p2sf\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629966 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629974 4959 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.629982 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cdbda8-c419-4768-ac68-598950ed9387-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.633000 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-log-httpd\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.633234 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-run-httpd\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.634743 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.635229 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.636987 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-scripts\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.638832 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.639585 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-config-data\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.640578 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.642197 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.642662 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.646856 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mll7\" (UniqueName: \"kubernetes.io/projected/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-kube-api-access-7mll7\") pod \"ceilometer-0\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " pod="openstack/ceilometer-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.650207 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9mq6\" (UniqueName: \"kubernetes.io/projected/aaca072b-9250-426c-8ce9-0982f870f2c0-kube-api-access-b9mq6\") pod \"kube-state-metrics-0\" (UID: \"aaca072b-9250-426c-8ce9-0982f870f2c0\") " pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.711768 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:32:34 crc kubenswrapper[4959]: I0121 13:32:34.747842 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.149135 4959 generic.go:334] "Generic (PLEG): container finished" podID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerID="2065a98e4ace8696e443ad70187e5279759f526e7fa159e7ed38008f2878f8af" exitCode=0 Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.149607 4959 generic.go:334] "Generic (PLEG): container finished" podID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerID="4518c035663666e1a5db5d442824ecc9f3dda10e2fefdea55acb8ff90002feaf" exitCode=143 Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.149317 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e41b2cea-29e2-4b7f-bcc6-b920099a3872","Type":"ContainerDied","Data":"2065a98e4ace8696e443ad70187e5279759f526e7fa159e7ed38008f2878f8af"} Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.149693 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e41b2cea-29e2-4b7f-bcc6-b920099a3872","Type":"ContainerDied","Data":"4518c035663666e1a5db5d442824ecc9f3dda10e2fefdea55acb8ff90002feaf"} Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.154375 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.154450 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-p22bw" event={"ID":"00cdbda8-c419-4768-ac68-598950ed9387","Type":"ContainerDied","Data":"710cdd360c2d5ec5dc75f75c1924f9bd5b848239cfa04976d668f864755e2a6f"} Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.154511 4959 scope.go:117] "RemoveContainer" containerID="9cec4098898f9ea337b9a4b23290bd85a8da4a4f8ede77a700bbda1843a7b6cb" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.194413 4959 scope.go:117] "RemoveContainer" containerID="e1bcedb026f22690f05b7582e9f5509fab6521b601ce73b5ff5ac06cfff4101a" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.217575 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-p22bw"] Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.228602 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-p22bw"] Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.323075 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00cdbda8-c419-4768-ac68-598950ed9387" path="/var/lib/kubelet/pods/00cdbda8-c419-4768-ac68-598950ed9387/volumes" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.324302 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c3e540-1be8-41f8-92e8-1371c406c6f2" path="/var/lib/kubelet/pods/c4c3e540-1be8-41f8-92e8-1371c406c6f2/volumes" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.324962 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccabc79a-7b30-4e5f-8b1f-a978016f0d54" path="/var/lib/kubelet/pods/ccabc79a-7b30-4e5f-8b1f-a978016f0d54/volumes" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.327561 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.394056 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.537328 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.676764 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.708719 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-scripts\") pod \"4a8e32a2-8454-4269-a372-96d891edcdda\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.708865 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-combined-ca-bundle\") pod \"4a8e32a2-8454-4269-a372-96d891edcdda\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.708976 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-config-data\") pod \"4a8e32a2-8454-4269-a372-96d891edcdda\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.709020 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76b29\" (UniqueName: \"kubernetes.io/projected/4a8e32a2-8454-4269-a372-96d891edcdda-kube-api-access-76b29\") pod \"4a8e32a2-8454-4269-a372-96d891edcdda\" (UID: \"4a8e32a2-8454-4269-a372-96d891edcdda\") " Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.722427 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8e32a2-8454-4269-a372-96d891edcdda-kube-api-access-76b29" (OuterVolumeSpecName: "kube-api-access-76b29") pod "4a8e32a2-8454-4269-a372-96d891edcdda" (UID: "4a8e32a2-8454-4269-a372-96d891edcdda"). InnerVolumeSpecName "kube-api-access-76b29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.723776 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-scripts" (OuterVolumeSpecName: "scripts") pod "4a8e32a2-8454-4269-a372-96d891edcdda" (UID: "4a8e32a2-8454-4269-a372-96d891edcdda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.743444 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-config-data" (OuterVolumeSpecName: "config-data") pod "4a8e32a2-8454-4269-a372-96d891edcdda" (UID: "4a8e32a2-8454-4269-a372-96d891edcdda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.759270 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a8e32a2-8454-4269-a372-96d891edcdda" (UID: "4a8e32a2-8454-4269-a372-96d891edcdda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.810837 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41b2cea-29e2-4b7f-bcc6-b920099a3872-logs\") pod \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811127 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-config-data\") pod \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811171 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9vl6\" (UniqueName: \"kubernetes.io/projected/e41b2cea-29e2-4b7f-bcc6-b920099a3872-kube-api-access-z9vl6\") pod \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811224 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-combined-ca-bundle\") pod \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\" (UID: \"e41b2cea-29e2-4b7f-bcc6-b920099a3872\") " Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811400 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41b2cea-29e2-4b7f-bcc6-b920099a3872-logs" (OuterVolumeSpecName: "logs") pod "e41b2cea-29e2-4b7f-bcc6-b920099a3872" (UID: "e41b2cea-29e2-4b7f-bcc6-b920099a3872"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811883 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811913 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76b29\" (UniqueName: \"kubernetes.io/projected/4a8e32a2-8454-4269-a372-96d891edcdda-kube-api-access-76b29\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811930 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41b2cea-29e2-4b7f-bcc6-b920099a3872-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811942 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.811954 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8e32a2-8454-4269-a372-96d891edcdda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.819469 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41b2cea-29e2-4b7f-bcc6-b920099a3872-kube-api-access-z9vl6" (OuterVolumeSpecName: "kube-api-access-z9vl6") pod "e41b2cea-29e2-4b7f-bcc6-b920099a3872" (UID: "e41b2cea-29e2-4b7f-bcc6-b920099a3872"). 
InnerVolumeSpecName "kube-api-access-z9vl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.842278 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e41b2cea-29e2-4b7f-bcc6-b920099a3872" (UID: "e41b2cea-29e2-4b7f-bcc6-b920099a3872"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.876561 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-config-data" (OuterVolumeSpecName: "config-data") pod "e41b2cea-29e2-4b7f-bcc6-b920099a3872" (UID: "e41b2cea-29e2-4b7f-bcc6-b920099a3872"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.914079 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.914145 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9vl6\" (UniqueName: \"kubernetes.io/projected/e41b2cea-29e2-4b7f-bcc6-b920099a3872-kube-api-access-z9vl6\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:35 crc kubenswrapper[4959]: I0121 13:32:35.914157 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41b2cea-29e2-4b7f-bcc6-b920099a3872-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.165762 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e41b2cea-29e2-4b7f-bcc6-b920099a3872","Type":"ContainerDied","Data":"e4b2c48b3b37220a10a2830db82a243407b601585110a736414d659d632dd20d"} Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.165839 4959 scope.go:117] "RemoveContainer" containerID="2065a98e4ace8696e443ad70187e5279759f526e7fa159e7ed38008f2878f8af" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.166157 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.168641 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j649g" event={"ID":"4a8e32a2-8454-4269-a372-96d891edcdda","Type":"ContainerDied","Data":"439c906f780028399038916f589ed803fc5d66611b2944929bce3d7bb13dd6da"} Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.168689 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="439c906f780028399038916f589ed803fc5d66611b2944929bce3d7bb13dd6da" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.168656 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j649g" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.171077 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aaca072b-9250-426c-8ce9-0982f870f2c0","Type":"ContainerStarted","Data":"af05abdca9a23208dc815d7d079f8f4e814bba8a8c5c95e1c4a77cdcba59fe85"} Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.171138 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aaca072b-9250-426c-8ce9-0982f870f2c0","Type":"ContainerStarted","Data":"b473f3a1d6aaf5b1d2fe118e08fc9960088529312d3e260d6b9f619b22abe5b8"} Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.172084 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.177659 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerStarted","Data":"771e3351945a2d061851b38d132b39c64739b4241463374d5ca83d6ffa45da1b"} Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.177707 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerStarted","Data":"6f113ed0b1600c4620f7ef2c0764a20db4c661dc201634c25ba6b8e1f84a3cff"} Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.253669 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.817473688 podStartE2EDuration="2.2536493s" podCreationTimestamp="2026-01-21 13:32:34 +0000 UTC" firstStartedPulling="2026-01-21 13:32:35.309199822 +0000 UTC m=+1416.272230365" lastFinishedPulling="2026-01-21 13:32:35.745375424 +0000 UTC m=+1416.708405977" observedRunningTime="2026-01-21 13:32:36.2507451 +0000 UTC m=+1417.213775643" watchObservedRunningTime="2026-01-21 13:32:36.2536493 +0000 UTC m=+1417.216679843" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.266629 4959 scope.go:117] "RemoveContainer" containerID="4518c035663666e1a5db5d442824ecc9f3dda10e2fefdea55acb8ff90002feaf" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.292068 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.306986 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.325460 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:36 crc kubenswrapper[4959]: E0121 13:32:36.326222 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerName="nova-metadata-log" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.326362 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerName="nova-metadata-log" Jan 21 13:32:36 crc kubenswrapper[4959]: E0121 13:32:36.326459 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8e32a2-8454-4269-a372-96d891edcdda" containerName="nova-manage" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.326545 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8e32a2-8454-4269-a372-96d891edcdda" containerName="nova-manage" Jan 21 13:32:36 crc kubenswrapper[4959]: E0121 13:32:36.326636 4959 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerName="nova-metadata-metadata" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.326705 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerName="nova-metadata-metadata" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.327051 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerName="nova-metadata-log" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.327169 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8e32a2-8454-4269-a372-96d891edcdda" containerName="nova-manage" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.327253 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" containerName="nova-metadata-metadata" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.328513 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.333638 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.334079 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.334630 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.418280 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.419005 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerName="nova-api-api" containerID="cri-o://6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0" gracePeriod=30 Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.419223 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerName="nova-api-log" containerID="cri-o://e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850" gracePeriod=30 Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.442385 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.442809 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0b08f48f-c239-4fe2-9f70-f35c0877fd64" containerName="nova-scheduler-scheduler" containerID="cri-o://657fd1f68cc14d3c50a8462039f273e6bd3aef2806b8e525e2357e5f49175561" gracePeriod=30 Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.458347 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:36 crc kubenswrapper[4959]: E0121 13:32:36.459193 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-jnpdb logs nova-metadata-tls-certs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-metadata-0" podUID="ed66642c-90ef-47bc-ad6e-79c4b88352f8" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 
13:32:36.464979 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.465085 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-config-data\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.465140 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnpdb\" (UniqueName: \"kubernetes.io/projected/ed66642c-90ef-47bc-ad6e-79c4b88352f8-kube-api-access-jnpdb\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.466332 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed66642c-90ef-47bc-ad6e-79c4b88352f8-logs\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.466383 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.568212 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-config-data\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.568288 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnpdb\" (UniqueName: \"kubernetes.io/projected/ed66642c-90ef-47bc-ad6e-79c4b88352f8-kube-api-access-jnpdb\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.568335 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed66642c-90ef-47bc-ad6e-79c4b88352f8-logs\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.568386 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.568532 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.571580 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed66642c-90ef-47bc-ad6e-79c4b88352f8-logs\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.576826 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.577081 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-config-data\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.577746 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:36 crc kubenswrapper[4959]: I0121 13:32:36.594236 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnpdb\" (UniqueName: \"kubernetes.io/projected/ed66642c-90ef-47bc-ad6e-79c4b88352f8-kube-api-access-jnpdb\") pod \"nova-metadata-0\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " pod="openstack/nova-metadata-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.020695 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.180161 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-config-data\") pod \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.180217 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-combined-ca-bundle\") pod \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.180405 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6st8\" (UniqueName: \"kubernetes.io/projected/c4fbea3d-bd4f-4889-be56-186f88e4e96c-kube-api-access-q6st8\") pod \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.180436 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4fbea3d-bd4f-4889-be56-186f88e4e96c-logs\") pod \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\" (UID: \"c4fbea3d-bd4f-4889-be56-186f88e4e96c\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.181388 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4fbea3d-bd4f-4889-be56-186f88e4e96c-logs" (OuterVolumeSpecName: "logs") pod "c4fbea3d-bd4f-4889-be56-186f88e4e96c" (UID: "c4fbea3d-bd4f-4889-be56-186f88e4e96c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.192068 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fbea3d-bd4f-4889-be56-186f88e4e96c-kube-api-access-q6st8" (OuterVolumeSpecName: "kube-api-access-q6st8") pod "c4fbea3d-bd4f-4889-be56-186f88e4e96c" (UID: "c4fbea3d-bd4f-4889-be56-186f88e4e96c"). InnerVolumeSpecName "kube-api-access-q6st8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.199124 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerStarted","Data":"9eaed876d31dff088d9077f4e375a2d036fcf785658a096cfc79ff760ad1510f"} Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.201501 4959 generic.go:334] "Generic (PLEG): container finished" podID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerID="6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0" exitCode=0 Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.201541 4959 generic.go:334] "Generic (PLEG): container finished" podID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerID="e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850" exitCode=143 Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.201639 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.203691 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4fbea3d-bd4f-4889-be56-186f88e4e96c","Type":"ContainerDied","Data":"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0"} Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.203864 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4fbea3d-bd4f-4889-be56-186f88e4e96c","Type":"ContainerDied","Data":"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850"} Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.203968 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4fbea3d-bd4f-4889-be56-186f88e4e96c","Type":"ContainerDied","Data":"8e21a6c4664d543cdc43d13e4987f5293ebf944a188bd86a99c5b3dc8e11cc4c"} Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.204125 4959 scope.go:117] "RemoveContainer" containerID="6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.205727 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.209685 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-config-data" (OuterVolumeSpecName: "config-data") pod "c4fbea3d-bd4f-4889-be56-186f88e4e96c" (UID: "c4fbea3d-bd4f-4889-be56-186f88e4e96c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.224772 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4fbea3d-bd4f-4889-be56-186f88e4e96c" (UID: "c4fbea3d-bd4f-4889-be56-186f88e4e96c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.233487 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.242571 4959 scope.go:117] "RemoveContainer" containerID="e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.264392 4959 scope.go:117] "RemoveContainer" containerID="6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0" Jan 21 13:32:37 crc kubenswrapper[4959]: E0121 13:32:37.264900 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0\": container with ID starting with 6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0 not found: ID does not exist" containerID="6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.265534 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0"} err="failed to get container status \"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0\": rpc error: code = NotFound desc = could not find container \"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0\": container with ID starting with 6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0 not found: ID does not exist" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.265591 4959 scope.go:117] "RemoveContainer" containerID="e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850" Jan 21 13:32:37 crc kubenswrapper[4959]: E0121 13:32:37.265948 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850\": container with ID starting with e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850 not found: ID does not exist" containerID="e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.265974 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850"} err="failed to get container status \"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850\": rpc error: code = NotFound desc = could not find container \"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850\": container with ID starting with e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850 not found: ID does not exist" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.265990 4959 scope.go:117] "RemoveContainer" containerID="6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.266288 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0"} err="failed to get container status \"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0\": rpc error: code = NotFound desc = could not find container \"6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0\": container with ID starting with 6b310689115eaaf4ce8f028fc08476f5f3e4cfade8245bc82437696a00342db0 not found: ID does not exist" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.266312 4959 
scope.go:117] "RemoveContainer" containerID="e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.266584 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850"} err="failed to get container status \"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850\": rpc error: code = NotFound desc = could not find container \"e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850\": container with ID starting with e0538f3fcc2834dfd1a0410f79753f833713fe8c2fecffa266ef00e19f8da850 not found: ID does not exist" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.282862 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.282901 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fbea3d-bd4f-4889-be56-186f88e4e96c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.282913 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6st8\" (UniqueName: \"kubernetes.io/projected/c4fbea3d-bd4f-4889-be56-186f88e4e96c-kube-api-access-q6st8\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.282922 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4fbea3d-bd4f-4889-be56-186f88e4e96c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.309948 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41b2cea-29e2-4b7f-bcc6-b920099a3872" path="/var/lib/kubelet/pods/e41b2cea-29e2-4b7f-bcc6-b920099a3872/volumes" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.384398 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed66642c-90ef-47bc-ad6e-79c4b88352f8-logs\") pod \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.384495 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-combined-ca-bundle\") pod \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.384712 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnpdb\" (UniqueName: \"kubernetes.io/projected/ed66642c-90ef-47bc-ad6e-79c4b88352f8-kube-api-access-jnpdb\") pod \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.384738 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-config-data\") pod \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.384840 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-nova-metadata-tls-certs\") pod \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\" (UID: \"ed66642c-90ef-47bc-ad6e-79c4b88352f8\") " Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.385188 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed66642c-90ef-47bc-ad6e-79c4b88352f8-logs" (OuterVolumeSpecName: "logs") pod "ed66642c-90ef-47bc-ad6e-79c4b88352f8" (UID: "ed66642c-90ef-47bc-ad6e-79c4b88352f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.386314 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed66642c-90ef-47bc-ad6e-79c4b88352f8-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.387811 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed66642c-90ef-47bc-ad6e-79c4b88352f8" (UID: "ed66642c-90ef-47bc-ad6e-79c4b88352f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.388466 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ed66642c-90ef-47bc-ad6e-79c4b88352f8" (UID: "ed66642c-90ef-47bc-ad6e-79c4b88352f8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.389306 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed66642c-90ef-47bc-ad6e-79c4b88352f8-kube-api-access-jnpdb" (OuterVolumeSpecName: "kube-api-access-jnpdb") pod "ed66642c-90ef-47bc-ad6e-79c4b88352f8" (UID: "ed66642c-90ef-47bc-ad6e-79c4b88352f8"). InnerVolumeSpecName "kube-api-access-jnpdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.395891 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-config-data" (OuterVolumeSpecName: "config-data") pod "ed66642c-90ef-47bc-ad6e-79c4b88352f8" (UID: "ed66642c-90ef-47bc-ad6e-79c4b88352f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.487911 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnpdb\" (UniqueName: \"kubernetes.io/projected/ed66642c-90ef-47bc-ad6e-79c4b88352f8-kube-api-access-jnpdb\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.487944 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.487955 4959 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.487965 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed66642c-90ef-47bc-ad6e-79c4b88352f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.539185 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.553181 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.567170 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:37 crc kubenswrapper[4959]: E0121 13:32:37.567589 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerName="nova-api-log" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.567606 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerName="nova-api-log" Jan 21 13:32:37 crc kubenswrapper[4959]: E0121 13:32:37.567619 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerName="nova-api-api" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.567625 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerName="nova-api-api" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.567820 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerName="nova-api-api" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.567851 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" containerName="nova-api-log" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.568750 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.572225 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.581621 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.603942 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg278\" (UniqueName: \"kubernetes.io/projected/c8470858-af94-473d-bc0e-0621299d9503-kube-api-access-cg278\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.603978 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-config-data\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.604037 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.604068 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8470858-af94-473d-bc0e-0621299d9503-logs\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.707807 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg278\" (UniqueName: \"kubernetes.io/projected/c8470858-af94-473d-bc0e-0621299d9503-kube-api-access-cg278\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.707862 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-config-data\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.708323 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.708382 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8470858-af94-473d-bc0e-0621299d9503-logs\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.708772 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8470858-af94-473d-bc0e-0621299d9503-logs\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " 
pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.711924 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-config-data\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.712368 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.729073 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg278\" (UniqueName: \"kubernetes.io/projected/c8470858-af94-473d-bc0e-0621299d9503-kube-api-access-cg278\") pod \"nova-api-0\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " pod="openstack/nova-api-0" Jan 21 13:32:37 crc kubenswrapper[4959]: I0121 13:32:37.906374 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.214604 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerStarted","Data":"679ac28c1447a1dcc34f8bbdfeb469eea56f79aacd2998edc50f7cc26e20ffea"} Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.215845 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.268252 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.279376 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.566174 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.653603 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.655846 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.666002 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.670298 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.685620 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.693568 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.827229 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.827306 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-config-data\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.827447 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38325378-8a06-4b6f-8de3-41705b371331-logs\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.827799 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrtw\" (UniqueName: \"kubernetes.io/projected/38325378-8a06-4b6f-8de3-41705b371331-kube-api-access-dwrtw\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.827850 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.929853 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrtw\" (UniqueName: \"kubernetes.io/projected/38325378-8a06-4b6f-8de3-41705b371331-kube-api-access-dwrtw\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.929900 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.929947 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.929968 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-config-data\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.929993 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38325378-8a06-4b6f-8de3-41705b371331-logs\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.930527 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38325378-8a06-4b6f-8de3-41705b371331-logs\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.938966 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-config-data\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.939238 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.939422 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.948400 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrtw\" (UniqueName: \"kubernetes.io/projected/38325378-8a06-4b6f-8de3-41705b371331-kube-api-access-dwrtw\") pod \"nova-metadata-0\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " pod="openstack/nova-metadata-0" Jan 21 13:32:38 crc kubenswrapper[4959]: I0121 13:32:38.994519 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:32:39 crc kubenswrapper[4959]: I0121 13:32:39.008976 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:32:39 crc kubenswrapper[4959]: I0121 13:32:39.238692 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8470858-af94-473d-bc0e-0621299d9503","Type":"ContainerStarted","Data":"b666f55329806850447eadc6180155661d47c770b1c2102285e596abd97ded26"} Jan 21 13:32:39 crc kubenswrapper[4959]: I0121 13:32:39.306139 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4fbea3d-bd4f-4889-be56-186f88e4e96c" path="/var/lib/kubelet/pods/c4fbea3d-bd4f-4889-be56-186f88e4e96c/volumes" Jan 21 13:32:39 crc kubenswrapper[4959]: I0121 13:32:39.308651 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed66642c-90ef-47bc-ad6e-79c4b88352f8" path="/var/lib/kubelet/pods/ed66642c-90ef-47bc-ad6e-79c4b88352f8/volumes" Jan 21 13:32:39 crc kubenswrapper[4959]: I0121 13:32:39.470736 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.246814 4959 generic.go:334] "Generic (PLEG): container finished" podID="aa5c2306-fe9b-475e-8b0e-9ecf06f69050" containerID="b41fe1ef7128f9dba919b6f450fb67cd62961492953991bcf72304f4a3d278c5" exitCode=0 Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.246915 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" event={"ID":"aa5c2306-fe9b-475e-8b0e-9ecf06f69050","Type":"ContainerDied","Data":"b41fe1ef7128f9dba919b6f450fb67cd62961492953991bcf72304f4a3d278c5"} Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.252405 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerStarted","Data":"11cd5ef15d063a8eb9f9cda522e5feda55c535822931980aec4df5e3d8c35f90"} Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.252865 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.254304 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38325378-8a06-4b6f-8de3-41705b371331","Type":"ContainerStarted","Data":"0dc8a031e7c20f51405359cee25acbe3ad1c1c0ba777f75e84f09346f999d826"} Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.254378 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38325378-8a06-4b6f-8de3-41705b371331","Type":"ContainerStarted","Data":"79bfde0ad493ab04463861b02a745efa0ca5ed2363b0ca6c87888302ae699d72"} Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.254391 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38325378-8a06-4b6f-8de3-41705b371331","Type":"ContainerStarted","Data":"3bed7d6ae05d12f2bc91626f661129a8310771e7b723595eeb48dc1686896c03"} Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.256340 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8470858-af94-473d-bc0e-0621299d9503","Type":"ContainerStarted","Data":"009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917"} Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.256375 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c8470858-af94-473d-bc0e-0621299d9503","Type":"ContainerStarted","Data":"2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618"} Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.288330 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.466330832 podStartE2EDuration="6.28831446s" podCreationTimestamp="2026-01-21 13:32:34 +0000 UTC" firstStartedPulling="2026-01-21 13:32:35.412432637 +0000 UTC m=+1416.375463180" lastFinishedPulling="2026-01-21 13:32:39.234416255 +0000 UTC m=+1420.197446808" observedRunningTime="2026-01-21 13:32:40.285386769 +0000 UTC m=+1421.248417312" watchObservedRunningTime="2026-01-21 13:32:40.28831446 +0000 UTC m=+1421.251345003" Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.305267 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.3052465189999998 podStartE2EDuration="3.305246519s" podCreationTimestamp="2026-01-21 13:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:32:40.301561227 +0000 UTC m=+1421.264591770" watchObservedRunningTime="2026-01-21 13:32:40.305246519 +0000 UTC m=+1421.268277072" Jan 21 13:32:40 crc kubenswrapper[4959]: I0121 13:32:40.339919 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.339898557 podStartE2EDuration="2.339898557s" podCreationTimestamp="2026-01-21 13:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:32:40.328030309 +0000 UTC m=+1421.291060862" watchObservedRunningTime="2026-01-21 13:32:40.339898557 +0000 UTC m=+1421.302929110" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.589319 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.680901 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-scripts\") pod \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.681068 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-config-data\") pod \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.681145 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqhjp\" (UniqueName: \"kubernetes.io/projected/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-kube-api-access-cqhjp\") pod \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.681203 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-combined-ca-bundle\") pod \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\" (UID: \"aa5c2306-fe9b-475e-8b0e-9ecf06f69050\") " Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.687414 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-scripts" (OuterVolumeSpecName: "scripts") pod "aa5c2306-fe9b-475e-8b0e-9ecf06f69050" (UID: "aa5c2306-fe9b-475e-8b0e-9ecf06f69050"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.687421 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-kube-api-access-cqhjp" (OuterVolumeSpecName: "kube-api-access-cqhjp") pod "aa5c2306-fe9b-475e-8b0e-9ecf06f69050" (UID: "aa5c2306-fe9b-475e-8b0e-9ecf06f69050"). InnerVolumeSpecName "kube-api-access-cqhjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.712087 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-config-data" (OuterVolumeSpecName: "config-data") pod "aa5c2306-fe9b-475e-8b0e-9ecf06f69050" (UID: "aa5c2306-fe9b-475e-8b0e-9ecf06f69050"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.714292 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa5c2306-fe9b-475e-8b0e-9ecf06f69050" (UID: "aa5c2306-fe9b-475e-8b0e-9ecf06f69050"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.783210 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.783547 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqhjp\" (UniqueName: \"kubernetes.io/projected/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-kube-api-access-cqhjp\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.783562 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:41 crc kubenswrapper[4959]: I0121 13:32:41.783570 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5c2306-fe9b-475e-8b0e-9ecf06f69050-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.276157 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.278213 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s4jcm" event={"ID":"aa5c2306-fe9b-475e-8b0e-9ecf06f69050","Type":"ContainerDied","Data":"de3d89b7bfb08b0c30dad008119d1b1edd2bb7a68887b43b154fafa70cb0d714"} Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.278277 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3d89b7bfb08b0c30dad008119d1b1edd2bb7a68887b43b154fafa70cb0d714" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.350564 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 13:32:42 crc kubenswrapper[4959]: E0121 13:32:42.350938 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5c2306-fe9b-475e-8b0e-9ecf06f69050" containerName="nova-cell1-conductor-db-sync" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.350955 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5c2306-fe9b-475e-8b0e-9ecf06f69050" containerName="nova-cell1-conductor-db-sync" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.351142 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5c2306-fe9b-475e-8b0e-9ecf06f69050" containerName="nova-cell1-conductor-db-sync" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.351672 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.355729 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.367779 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.514972 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da62b0b7-89d1-4932-a64e-004d8aa58035-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.515334 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da62b0b7-89d1-4932-a64e-004d8aa58035-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.515446 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxxf\" (UniqueName: \"kubernetes.io/projected/da62b0b7-89d1-4932-a64e-004d8aa58035-kube-api-access-wwxxf\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.616664 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da62b0b7-89d1-4932-a64e-004d8aa58035-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.616718 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwxxf\" (UniqueName: \"kubernetes.io/projected/da62b0b7-89d1-4932-a64e-004d8aa58035-kube-api-access-wwxxf\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.616796 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da62b0b7-89d1-4932-a64e-004d8aa58035-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.621029 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da62b0b7-89d1-4932-a64e-004d8aa58035-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.626655 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da62b0b7-89d1-4932-a64e-004d8aa58035-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.637578 4959 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwxxf\" (UniqueName: \"kubernetes.io/projected/da62b0b7-89d1-4932-a64e-004d8aa58035-kube-api-access-wwxxf\") pod \"nova-cell1-conductor-0\" (UID: \"da62b0b7-89d1-4932-a64e-004d8aa58035\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:42 crc kubenswrapper[4959]: I0121 13:32:42.695690 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:43 crc kubenswrapper[4959]: I0121 13:32:43.115437 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 13:32:43 crc kubenswrapper[4959]: W0121 13:32:43.116047 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda62b0b7_89d1_4932_a64e_004d8aa58035.slice/crio-bbaf07b07ddc51daef2b77d249b0501bc66fdeca7d39602499f1fb19a86e8fcd WatchSource:0}: Error finding container bbaf07b07ddc51daef2b77d249b0501bc66fdeca7d39602499f1fb19a86e8fcd: Status 404 returned error can't find the container with id bbaf07b07ddc51daef2b77d249b0501bc66fdeca7d39602499f1fb19a86e8fcd Jan 21 13:32:43 crc kubenswrapper[4959]: I0121 13:32:43.354235 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"da62b0b7-89d1-4932-a64e-004d8aa58035","Type":"ContainerStarted","Data":"bbaf07b07ddc51daef2b77d249b0501bc66fdeca7d39602499f1fb19a86e8fcd"} Jan 21 13:32:43 crc kubenswrapper[4959]: I0121 13:32:43.995116 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:32:43 crc kubenswrapper[4959]: I0121 13:32:43.995191 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:32:44 crc kubenswrapper[4959]: I0121 13:32:44.359116 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"da62b0b7-89d1-4932-a64e-004d8aa58035","Type":"ContainerStarted","Data":"13c8e3f7b65eeeab661a49104e3234a3bdd1eb20ec69bd97ebd7b4d8b32fe09e"} Jan 21 13:32:44 crc kubenswrapper[4959]: I0121 13:32:44.360477 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:44 crc kubenswrapper[4959]: I0121 13:32:44.384443 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.38442502 podStartE2EDuration="2.38442502s" podCreationTimestamp="2026-01-21 13:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:32:44.37718532 +0000 UTC m=+1425.340215863" watchObservedRunningTime="2026-01-21 13:32:44.38442502 +0000 UTC m=+1425.347455563" Jan 21 13:32:44 crc kubenswrapper[4959]: I0121 13:32:44.727655 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 13:32:47 crc kubenswrapper[4959]: I0121 13:32:47.907116 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:32:47 crc kubenswrapper[4959]: I0121 13:32:47.907453 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:32:48 crc kubenswrapper[4959]: I0121 13:32:48.989278 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 13:32:48 crc kubenswrapper[4959]: I0121 13:32:48.989313 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 13:32:48 crc kubenswrapper[4959]: I0121 13:32:48.994910 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 13:32:48 crc kubenswrapper[4959]: I0121 13:32:48.994987 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 13:32:50 crc kubenswrapper[4959]: I0121 13:32:50.011332 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:32:50 crc kubenswrapper[4959]: I0121 13:32:50.011626 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:32:52 crc kubenswrapper[4959]: I0121 13:32:52.730195 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 13:32:57 crc kubenswrapper[4959]: I0121 13:32:57.912150 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 13:32:57 crc kubenswrapper[4959]: I0121 13:32:57.913015 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 13:32:57 crc kubenswrapper[4959]: I0121 13:32:57.913540 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 13:32:57 crc kubenswrapper[4959]: I0121 13:32:57.915876 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.490637 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.493545 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.670305 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-jc4q6"] Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.672140 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.693399 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-jc4q6"] Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.823792 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.823842 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75jhh\" (UniqueName: \"kubernetes.io/projected/5688a156-f093-490c-832c-59254c10ba03-kube-api-access-75jhh\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.823898 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.823921 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-config\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.824023 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.925809 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.925911 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.925951 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75jhh\" (UniqueName: \"kubernetes.io/projected/5688a156-f093-490c-832c-59254c10ba03-kube-api-access-75jhh\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.926008 4959 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.926031 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-config\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.926786 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.927033 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-config\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.927730 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.927733 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.946525 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75jhh\" (UniqueName: \"kubernetes.io/projected/5688a156-f093-490c-832c-59254c10ba03-kube-api-access-75jhh\") pod \"dnsmasq-dns-68d4b6d797-jc4q6\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:58 crc kubenswrapper[4959]: I0121 13:32:58.995537 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:32:59 crc kubenswrapper[4959]: I0121 13:32:59.000272 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 13:32:59 crc kubenswrapper[4959]: I0121 13:32:59.006552 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 13:32:59 crc kubenswrapper[4959]: I0121 13:32:59.006767 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 13:32:59 crc kubenswrapper[4959]: I0121 13:32:59.505113 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 13:32:59 crc kubenswrapper[4959]: I0121 13:32:59.596486 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-jc4q6"] Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.509213 4959 generic.go:334] "Generic (PLEG): container finished" podID="5688a156-f093-490c-832c-59254c10ba03" containerID="215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb" exitCode=0 Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.509316 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" event={"ID":"5688a156-f093-490c-832c-59254c10ba03","Type":"ContainerDied","Data":"215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb"} Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.509603 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" event={"ID":"5688a156-f093-490c-832c-59254c10ba03","Type":"ContainerStarted","Data":"ac02e7d1ef085b45273439a9181766bfcb960a20d1767976062431fc5c8577fd"} Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.875787 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.876301 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="ceilometer-central-agent" containerID="cri-o://771e3351945a2d061851b38d132b39c64739b4241463374d5ca83d6ffa45da1b" gracePeriod=30 Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.876417 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="proxy-httpd" containerID="cri-o://11cd5ef15d063a8eb9f9cda522e5feda55c535822931980aec4df5e3d8c35f90" gracePeriod=30 Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.876454 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="sg-core" containerID="cri-o://679ac28c1447a1dcc34f8bbdfeb469eea56f79aacd2998edc50f7cc26e20ffea" gracePeriod=30 Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.876502 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="ceilometer-notification-agent" containerID="cri-o://9eaed876d31dff088d9077f4e375a2d036fcf785658a096cfc79ff760ad1510f" gracePeriod=30 Jan 21 13:33:00 crc kubenswrapper[4959]: I0121 13:33:00.881203 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" 
containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:3000/\": EOF" Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.176660 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.518689 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" event={"ID":"5688a156-f093-490c-832c-59254c10ba03","Type":"ContainerStarted","Data":"ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d"} Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.518809 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.520794 4959 generic.go:334] "Generic (PLEG): container finished" podID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerID="11cd5ef15d063a8eb9f9cda522e5feda55c535822931980aec4df5e3d8c35f90" exitCode=0 Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.520827 4959 generic.go:334] "Generic (PLEG): container finished" podID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerID="679ac28c1447a1dcc34f8bbdfeb469eea56f79aacd2998edc50f7cc26e20ffea" exitCode=2 Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.520837 4959 generic.go:334] "Generic (PLEG): container finished" podID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerID="771e3351945a2d061851b38d132b39c64739b4241463374d5ca83d6ffa45da1b" exitCode=0 Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.520865 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerDied","Data":"11cd5ef15d063a8eb9f9cda522e5feda55c535822931980aec4df5e3d8c35f90"} Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.520904 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerDied","Data":"679ac28c1447a1dcc34f8bbdfeb469eea56f79aacd2998edc50f7cc26e20ffea"} Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.520921 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerDied","Data":"771e3351945a2d061851b38d132b39c64739b4241463374d5ca83d6ffa45da1b"} Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.521035 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-log" containerID="cri-o://2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618" gracePeriod=30 Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.521079 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-api" containerID="cri-o://009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917" gracePeriod=30 Jan 21 13:33:01 crc kubenswrapper[4959]: I0121 13:33:01.545870 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" podStartSLOduration=3.5458450299999997 podStartE2EDuration="3.54584503s" podCreationTimestamp="2026-01-21 13:32:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:33:01.538162003 +0000 UTC 
m=+1442.501192556" watchObservedRunningTime="2026-01-21 13:33:01.54584503 +0000 UTC m=+1442.508875573" Jan 21 13:33:02 crc kubenswrapper[4959]: I0121 13:33:02.531844 4959 generic.go:334] "Generic (PLEG): container finished" podID="c8470858-af94-473d-bc0e-0621299d9503" containerID="2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618" exitCode=143 Jan 21 13:33:02 crc kubenswrapper[4959]: I0121 13:33:02.531934 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8470858-af94-473d-bc0e-0621299d9503","Type":"ContainerDied","Data":"2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618"} Jan 21 13:33:04 crc kubenswrapper[4959]: E0121 13:33:04.419517 4959 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b3ab75e_788e_44ce_8f31_3125d4b6e96b.slice/crio-9eaed876d31dff088d9077f4e375a2d036fcf785658a096cfc79ff760ad1510f.scope\": RecentStats: unable to find data in memory cache]" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.531566 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.553913 4959 generic.go:334] "Generic (PLEG): container finished" podID="198f274c-7d83-4c1d-9145-083c1987df6c" containerID="42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec" exitCode=137 Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.553994 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"198f274c-7d83-4c1d-9145-083c1987df6c","Type":"ContainerDied","Data":"42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec"} Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.554029 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"198f274c-7d83-4c1d-9145-083c1987df6c","Type":"ContainerDied","Data":"3f0a2104a2b2cc1c5c27229ba91cbbf9ac18fe28052e1c8dd476741e96722073"} Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.554053 4959 scope.go:117] "RemoveContainer" containerID="42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.554234 4959 util.go:48] "No ready sandbox for pod can be found. 
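The exit codes in the surrounding entries follow the usual 128+signal convention: nova-api-log leaves with exitCode=143 (128+15, SIGTERM honored inside the 30-second grace period requested by the "Killing container with a grace period" entries above), the novncproxy container just above dies with exitCode=137 (128+9, SIGKILL once the grace period lapsed), clean shutdowns such as ceilometer's agents report exitCode=0, and sg-core's exitCode=2 is the process's own error status rather than a signal. A generic Go illustration of the TERM-then-KILL pattern (a plain-process stand-in; in the cluster the CRI runtime does this, not code like the following):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the kubelet's request: SIGTERM first, SIGKILL when the
// grace period runs out. A process that honors SIGTERM reports 143 (128+15);
// one that ignores it is killed and reports 137 (128+9).
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		cmd.Process.Kill()
		<-done
		fmt.Println("killed after grace period expired")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second) // the pods above use gracePeriod=30
}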
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.566992 4959 generic.go:334] "Generic (PLEG): container finished" podID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerID="9eaed876d31dff088d9077f4e375a2d036fcf785658a096cfc79ff760ad1510f" exitCode=0 Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.567043 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerDied","Data":"9eaed876d31dff088d9077f4e375a2d036fcf785658a096cfc79ff760ad1510f"} Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.595281 4959 scope.go:117] "RemoveContainer" containerID="42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec" Jan 21 13:33:04 crc kubenswrapper[4959]: E0121 13:33:04.595678 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec\": container with ID starting with 42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec not found: ID does not exist" containerID="42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.595727 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec"} err="failed to get container status \"42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec\": rpc error: code = NotFound desc = could not find container \"42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec\": container with ID starting with 42eea2cab3596eeffccbebd891f1c3796b20e341247277e33a936d487c08cbec not found: ID does not exist" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.639405 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j26c\" (UniqueName: \"kubernetes.io/projected/198f274c-7d83-4c1d-9145-083c1987df6c-kube-api-access-4j26c\") pod \"198f274c-7d83-4c1d-9145-083c1987df6c\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.639573 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-combined-ca-bundle\") pod \"198f274c-7d83-4c1d-9145-083c1987df6c\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.639641 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-config-data\") pod \"198f274c-7d83-4c1d-9145-083c1987df6c\" (UID: \"198f274c-7d83-4c1d-9145-083c1987df6c\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.653702 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198f274c-7d83-4c1d-9145-083c1987df6c-kube-api-access-4j26c" (OuterVolumeSpecName: "kube-api-access-4j26c") pod "198f274c-7d83-4c1d-9145-083c1987df6c" (UID: "198f274c-7d83-4c1d-9145-083c1987df6c"). InnerVolumeSpecName "kube-api-access-4j26c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.655967 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.670356 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-config-data" (OuterVolumeSpecName: "config-data") pod "198f274c-7d83-4c1d-9145-083c1987df6c" (UID: "198f274c-7d83-4c1d-9145-083c1987df6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.672964 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "198f274c-7d83-4c1d-9145-083c1987df6c" (UID: "198f274c-7d83-4c1d-9145-083c1987df6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.742839 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.742887 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198f274c-7d83-4c1d-9145-083c1987df6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.742901 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j26c\" (UniqueName: \"kubernetes.io/projected/198f274c-7d83-4c1d-9145-083c1987df6c-kube-api-access-4j26c\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.844212 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-scripts\") pod \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.846315 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-run-httpd\") pod \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.846419 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-combined-ca-bundle\") pod \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.846535 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-sg-core-conf-yaml\") pod \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.846568 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-log-httpd\") pod \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.846640 4959 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mll7\" (UniqueName: \"kubernetes.io/projected/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-kube-api-access-7mll7\") pod \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.846691 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-ceilometer-tls-certs\") pod \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.846773 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-config-data\") pod \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\" (UID: \"2b3ab75e-788e-44ce-8f31-3125d4b6e96b\") " Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.847067 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b3ab75e-788e-44ce-8f31-3125d4b6e96b" (UID: "2b3ab75e-788e-44ce-8f31-3125d4b6e96b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.847679 4959 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.847970 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b3ab75e-788e-44ce-8f31-3125d4b6e96b" (UID: "2b3ab75e-788e-44ce-8f31-3125d4b6e96b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.848144 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-scripts" (OuterVolumeSpecName: "scripts") pod "2b3ab75e-788e-44ce-8f31-3125d4b6e96b" (UID: "2b3ab75e-788e-44ce-8f31-3125d4b6e96b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.853322 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-kube-api-access-7mll7" (OuterVolumeSpecName: "kube-api-access-7mll7") pod "2b3ab75e-788e-44ce-8f31-3125d4b6e96b" (UID: "2b3ab75e-788e-44ce-8f31-3125d4b6e96b"). InnerVolumeSpecName "kube-api-access-7mll7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.878954 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b3ab75e-788e-44ce-8f31-3125d4b6e96b" (UID: "2b3ab75e-788e-44ce-8f31-3125d4b6e96b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.906113 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.918087 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2b3ab75e-788e-44ce-8f31-3125d4b6e96b" (UID: "2b3ab75e-788e-44ce-8f31-3125d4b6e96b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.920369 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.937263 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:33:04 crc kubenswrapper[4959]: E0121 13:33:04.937855 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="ceilometer-notification-agent" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.937983 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="ceilometer-notification-agent" Jan 21 13:33:04 crc kubenswrapper[4959]: E0121 13:33:04.938078 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="sg-core" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.938167 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="sg-core" Jan 21 13:33:04 crc kubenswrapper[4959]: E0121 13:33:04.938245 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198f274c-7d83-4c1d-9145-083c1987df6c" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.938319 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="198f274c-7d83-4c1d-9145-083c1987df6c" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 13:33:04 crc kubenswrapper[4959]: E0121 13:33:04.938486 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="ceilometer-central-agent" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.938564 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="ceilometer-central-agent" Jan 21 13:33:04 crc kubenswrapper[4959]: E0121 13:33:04.938640 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="proxy-httpd" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.938702 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="proxy-httpd" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.938965 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="ceilometer-central-agent" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.939043 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="ceilometer-notification-agent" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.939147 4959 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="sg-core" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.939239 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="198f274c-7d83-4c1d-9145-083c1987df6c" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.939330 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" containerName="proxy-httpd" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.940124 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.943516 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.945640 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.945666 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.945856 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948142 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948198 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948238 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948269 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5vz\" (UniqueName: \"kubernetes.io/projected/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-kube-api-access-nx5vz\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948298 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948475 4959 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948504 4959 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948518 4959 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948530 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mll7\" (UniqueName: \"kubernetes.io/projected/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-kube-api-access-7mll7\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.948543 4959 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.970575 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b3ab75e-788e-44ce-8f31-3125d4b6e96b" (UID: "2b3ab75e-788e-44ce-8f31-3125d4b6e96b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:04 crc kubenswrapper[4959]: I0121 13:33:04.974771 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-config-data" (OuterVolumeSpecName: "config-data") pod "2b3ab75e-788e-44ce-8f31-3125d4b6e96b" (UID: "2b3ab75e-788e-44ce-8f31-3125d4b6e96b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.049815 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.049900 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.049925 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx5vz\" (UniqueName: \"kubernetes.io/projected/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-kube-api-access-nx5vz\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.049963 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.050068 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.050154 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.050170 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3ab75e-788e-44ce-8f31-3125d4b6e96b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.053505 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.053833 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.054461 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.054823 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.071908 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx5vz\" (UniqueName: \"kubernetes.io/projected/3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897-kube-api-access-nx5vz\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.075010 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.151573 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8470858-af94-473d-bc0e-0621299d9503-logs\") pod \"c8470858-af94-473d-bc0e-0621299d9503\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.151767 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-combined-ca-bundle\") pod \"c8470858-af94-473d-bc0e-0621299d9503\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.151828 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-config-data\") pod \"c8470858-af94-473d-bc0e-0621299d9503\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.151926 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg278\" (UniqueName: \"kubernetes.io/projected/c8470858-af94-473d-bc0e-0621299d9503-kube-api-access-cg278\") pod \"c8470858-af94-473d-bc0e-0621299d9503\" (UID: \"c8470858-af94-473d-bc0e-0621299d9503\") " Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.152373 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8470858-af94-473d-bc0e-0621299d9503-logs" (OuterVolumeSpecName: "logs") pod "c8470858-af94-473d-bc0e-0621299d9503" (UID: "c8470858-af94-473d-bc0e-0621299d9503"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.156444 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8470858-af94-473d-bc0e-0621299d9503-kube-api-access-cg278" (OuterVolumeSpecName: "kube-api-access-cg278") pod "c8470858-af94-473d-bc0e-0621299d9503" (UID: "c8470858-af94-473d-bc0e-0621299d9503"). InnerVolumeSpecName "kube-api-access-cg278". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.188374 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-config-data" (OuterVolumeSpecName: "config-data") pod "c8470858-af94-473d-bc0e-0621299d9503" (UID: "c8470858-af94-473d-bc0e-0621299d9503"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.199515 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8470858-af94-473d-bc0e-0621299d9503" (UID: "c8470858-af94-473d-bc0e-0621299d9503"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.255555 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.256044 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8470858-af94-473d-bc0e-0621299d9503-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.256071 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg278\" (UniqueName: \"kubernetes.io/projected/c8470858-af94-473d-bc0e-0621299d9503-kube-api-access-cg278\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.256114 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8470858-af94-473d-bc0e-0621299d9503-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.266206 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.363938 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198f274c-7d83-4c1d-9145-083c1987df6c" path="/var/lib/kubelet/pods/198f274c-7d83-4c1d-9145-083c1987df6c/volumes" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.590200 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b3ab75e-788e-44ce-8f31-3125d4b6e96b","Type":"ContainerDied","Data":"6f113ed0b1600c4620f7ef2c0764a20db4c661dc201634c25ba6b8e1f84a3cff"} Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.590283 4959 scope.go:117] "RemoveContainer" containerID="11cd5ef15d063a8eb9f9cda522e5feda55c535822931980aec4df5e3d8c35f90" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.590459 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.593478 4959 generic.go:334] "Generic (PLEG): container finished" podID="c8470858-af94-473d-bc0e-0621299d9503" containerID="009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917" exitCode=0 Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.593525 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8470858-af94-473d-bc0e-0621299d9503","Type":"ContainerDied","Data":"009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917"} Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.593551 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8470858-af94-473d-bc0e-0621299d9503","Type":"ContainerDied","Data":"b666f55329806850447eadc6180155661d47c770b1c2102285e596abd97ded26"} Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.593616 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.618554 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.630332 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.636847 4959 scope.go:117] "RemoveContainer" containerID="679ac28c1447a1dcc34f8bbdfeb469eea56f79aacd2998edc50f7cc26e20ffea" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.641438 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.653945 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.667478 4959 scope.go:117] "RemoveContainer" containerID="9eaed876d31dff088d9077f4e375a2d036fcf785658a096cfc79ff760ad1510f" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.667951 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: E0121 13:33:05.668382 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-api" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.668403 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-api" Jan 21 13:33:05 crc kubenswrapper[4959]: E0121 13:33:05.668423 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-log" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.668433 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-log" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.668609 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-log" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.668637 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8470858-af94-473d-bc0e-0621299d9503" containerName="nova-api-api" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.674303 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.680976 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.681417 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.681573 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.692768 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.698778 4959 scope.go:117] "RemoveContainer" containerID="771e3351945a2d061851b38d132b39c64739b4241463374d5ca83d6ffa45da1b" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.703439 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.705363 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.711799 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.712265 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.711883 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.731046 4959 scope.go:117] "RemoveContainer" containerID="009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.748428 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771480 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcxb\" (UniqueName: \"kubernetes.io/projected/2e0e478b-6faf-4540-ae97-30b2c6b019cd-kube-api-access-vzcxb\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771560 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-public-tls-certs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771587 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771647 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc 
kubenswrapper[4959]: I0121 13:33:05.771702 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771720 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-config-data\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771760 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-config-data\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771816 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-scripts\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771855 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771899 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771932 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9km9f\" (UniqueName: \"kubernetes.io/projected/bf899835-6388-4051-a969-141f848c1e47-kube-api-access-9km9f\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.771977 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf899835-6388-4051-a969-141f848c1e47-logs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.772009 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.772056 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.784425 4959 scope.go:117] "RemoveContainer" containerID="2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.812562 4959 scope.go:117] "RemoveContainer" containerID="009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917" Jan 21 13:33:05 crc kubenswrapper[4959]: E0121 13:33:05.812984 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917\": container with ID starting with 009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917 not found: ID does not exist" containerID="009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.813022 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917"} err="failed to get container status \"009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917\": rpc error: code = NotFound desc = could not find container \"009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917\": container with ID starting with 009c6ca30f51d98d8dbe89e4dc014e68bbdac02e45d2e20b3464822fab38b917 not found: ID does not exist" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.813051 4959 scope.go:117] "RemoveContainer" containerID="2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618" Jan 21 13:33:05 crc kubenswrapper[4959]: E0121 13:33:05.813258 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618\": container with ID starting with 2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618 not found: ID does not exist" containerID="2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.813285 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618"} err="failed to get container status \"2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618\": rpc error: code = NotFound desc = could not find container \"2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618\": container with ID starting with 2b6069e40c3456805a2e84b582722bba6e6a60a072d8b7e099a98ea7edca6618 not found: ID does not exist" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.832316 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874311 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874368 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874404 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9km9f\" (UniqueName: \"kubernetes.io/projected/bf899835-6388-4051-a969-141f848c1e47-kube-api-access-9km9f\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874432 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf899835-6388-4051-a969-141f848c1e47-logs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874456 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874491 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874555 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcxb\" (UniqueName: \"kubernetes.io/projected/2e0e478b-6faf-4540-ae97-30b2c6b019cd-kube-api-access-vzcxb\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874623 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-public-tls-certs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874645 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874688 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874723 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874744 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-config-data\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874775 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-config-data\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.874822 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-scripts\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.875130 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf899835-6388-4051-a969-141f848c1e47-logs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.875825 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-log-httpd\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.877750 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-run-httpd\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.883912 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-scripts\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.894012 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.894536 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-public-tls-certs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.895337 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.895671 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-config-data\") pod \"ceilometer-0\" (UID: 
\"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.895703 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.895932 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-config-data\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.897509 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9km9f\" (UniqueName: \"kubernetes.io/projected/bf899835-6388-4051-a969-141f848c1e47-kube-api-access-9km9f\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.898002 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcxb\" (UniqueName: \"kubernetes.io/projected/2e0e478b-6faf-4540-ae97-30b2c6b019cd-kube-api-access-vzcxb\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.898208 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " pod="openstack/ceilometer-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.912710 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " pod="openstack/nova-api-0" Jan 21 13:33:05 crc kubenswrapper[4959]: I0121 13:33:05.999505 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.024566 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.515274 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.586921 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.609703 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897","Type":"ContainerStarted","Data":"a60452ea2ce4fb9c67397dacb1c5a5a577ee62007ca432065bc021217ccd0506"} Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.612657 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897","Type":"ContainerStarted","Data":"ce3a34b9b1fbd4d978cc4c9dd28216cbc5c28c763739faf78cb7986a9d5cd27f"} Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.640566 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerStarted","Data":"f06e52c26a5bdc52661ec44a97f64bd61ccfa8f2af8ba23b6563b89fdc85cfb9"} Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.641958 4959 generic.go:334] "Generic (PLEG): container finished" podID="0b08f48f-c239-4fe2-9f70-f35c0877fd64" containerID="657fd1f68cc14d3c50a8462039f273e6bd3aef2806b8e525e2357e5f49175561" exitCode=137 Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.642016 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b08f48f-c239-4fe2-9f70-f35c0877fd64","Type":"ContainerDied","Data":"657fd1f68cc14d3c50a8462039f273e6bd3aef2806b8e525e2357e5f49175561"} Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.649441 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.6494089880000002 podStartE2EDuration="2.649408988s" podCreationTimestamp="2026-01-21 13:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:33:06.646595469 +0000 UTC m=+1447.609626012" watchObservedRunningTime="2026-01-21 13:33:06.649408988 +0000 UTC m=+1447.612439531" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.773370 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqq62"] Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.778459 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.792189 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqq62"] Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.888676 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.899118 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvmsr\" (UniqueName: \"kubernetes.io/projected/0b08f48f-c239-4fe2-9f70-f35c0877fd64-kube-api-access-lvmsr\") pod \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.899208 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-config-data\") pod \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.899440 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjrbn\" (UniqueName: \"kubernetes.io/projected/912ae55e-5977-41bc-8e7b-9e0fee202d9e-kube-api-access-hjrbn\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.899546 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-catalog-content\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.899574 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-utilities\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.910475 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b08f48f-c239-4fe2-9f70-f35c0877fd64-kube-api-access-lvmsr" (OuterVolumeSpecName: "kube-api-access-lvmsr") pod "0b08f48f-c239-4fe2-9f70-f35c0877fd64" (UID: "0b08f48f-c239-4fe2-9f70-f35c0877fd64"). InnerVolumeSpecName "kube-api-access-lvmsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:06 crc kubenswrapper[4959]: I0121 13:33:06.947252 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-config-data" (OuterVolumeSpecName: "config-data") pod "0b08f48f-c239-4fe2-9f70-f35c0877fd64" (UID: "0b08f48f-c239-4fe2-9f70-f35c0877fd64"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.000880 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-combined-ca-bundle\") pod \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\" (UID: \"0b08f48f-c239-4fe2-9f70-f35c0877fd64\") " Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.001303 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-catalog-content\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.001353 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-utilities\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.001422 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjrbn\" (UniqueName: \"kubernetes.io/projected/912ae55e-5977-41bc-8e7b-9e0fee202d9e-kube-api-access-hjrbn\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.001544 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.001566 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvmsr\" (UniqueName: \"kubernetes.io/projected/0b08f48f-c239-4fe2-9f70-f35c0877fd64-kube-api-access-lvmsr\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.001919 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-catalog-content\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.001958 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-utilities\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.024908 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjrbn\" (UniqueName: \"kubernetes.io/projected/912ae55e-5977-41bc-8e7b-9e0fee202d9e-kube-api-access-hjrbn\") pod \"redhat-operators-zqq62\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.037245 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"0b08f48f-c239-4fe2-9f70-f35c0877fd64" (UID: "0b08f48f-c239-4fe2-9f70-f35c0877fd64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.102827 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b08f48f-c239-4fe2-9f70-f35c0877fd64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.115662 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.311890 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3ab75e-788e-44ce-8f31-3125d4b6e96b" path="/var/lib/kubelet/pods/2b3ab75e-788e-44ce-8f31-3125d4b6e96b/volumes" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.313518 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8470858-af94-473d-bc0e-0621299d9503" path="/var/lib/kubelet/pods/c8470858-af94-473d-bc0e-0621299d9503/volumes" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.614816 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqq62"] Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.653473 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerStarted","Data":"769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2"} Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.654709 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b08f48f-c239-4fe2-9f70-f35c0877fd64","Type":"ContainerDied","Data":"a2444da6c9abec8d55f368c9f01b5c812ee93fb12eac922cb29f9be0b91b1155"} Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.654745 4959 scope.go:117] "RemoveContainer" containerID="657fd1f68cc14d3c50a8462039f273e6bd3aef2806b8e525e2357e5f49175561" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.654881 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.660160 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqq62" event={"ID":"912ae55e-5977-41bc-8e7b-9e0fee202d9e","Type":"ContainerStarted","Data":"4b62dea88aa08b934f0b8735142edbce7c72124cbb1373eca4966ce8b122273a"} Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.669680 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf899835-6388-4051-a969-141f848c1e47","Type":"ContainerStarted","Data":"8045fbc89f8d0c74657ae512b7aa9457088fc5034b49e87acf4ba1ea5b5a9de0"} Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.669725 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf899835-6388-4051-a969-141f848c1e47","Type":"ContainerStarted","Data":"1464c55e7758d240870b3237aeeeaa893c83ac174058f5684f804aa5eec70103"} Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.669738 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf899835-6388-4051-a969-141f848c1e47","Type":"ContainerStarted","Data":"6e516a7763cc914776aa6d5a4d41882c8e246774a6e9e9f896bd763722ce8812"} Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.691662 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6916446560000002 podStartE2EDuration="2.691644656s" podCreationTimestamp="2026-01-21 13:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:33:07.690757351 +0000 UTC m=+1448.653787894" watchObservedRunningTime="2026-01-21 13:33:07.691644656 +0000 UTC m=+1448.654675209" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.760147 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.794153 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.803904 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:07 crc kubenswrapper[4959]: E0121 13:33:07.804373 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b08f48f-c239-4fe2-9f70-f35c0877fd64" containerName="nova-scheduler-scheduler" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.804399 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b08f48f-c239-4fe2-9f70-f35c0877fd64" containerName="nova-scheduler-scheduler" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.804599 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b08f48f-c239-4fe2-9f70-f35c0877fd64" containerName="nova-scheduler-scheduler" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.805221 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.809470 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.817044 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.930135 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.930218 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-config-data\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:07 crc kubenswrapper[4959]: I0121 13:33:07.930249 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhr9\" (UniqueName: \"kubernetes.io/projected/ecad0604-dd95-4e22-80ec-fdb69f194e13-kube-api-access-qjhr9\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.031932 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.032385 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-config-data\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.032427 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjhr9\" (UniqueName: \"kubernetes.io/projected/ecad0604-dd95-4e22-80ec-fdb69f194e13-kube-api-access-qjhr9\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.036694 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.037319 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-config-data\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.051980 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhr9\" (UniqueName: 
\"kubernetes.io/projected/ecad0604-dd95-4e22-80ec-fdb69f194e13-kube-api-access-qjhr9\") pod \"nova-scheduler-0\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.200728 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.702928 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerStarted","Data":"103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65"} Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.710075 4959 generic.go:334] "Generic (PLEG): container finished" podID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerID="c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac" exitCode=0 Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.711602 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqq62" event={"ID":"912ae55e-5977-41bc-8e7b-9e0fee202d9e","Type":"ContainerDied","Data":"c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac"} Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.766649 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:08 crc kubenswrapper[4959]: W0121 13:33:08.766701 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecad0604_dd95_4e22_80ec_fdb69f194e13.slice/crio-07ca54f446c529fcdedfc67f89041e912e0090f39d73cf0e22dd486400aecbaf WatchSource:0}: Error finding container 07ca54f446c529fcdedfc67f89041e912e0090f39d73cf0e22dd486400aecbaf: Status 404 returned error can't find the container with id 07ca54f446c529fcdedfc67f89041e912e0090f39d73cf0e22dd486400aecbaf Jan 21 13:33:08 crc kubenswrapper[4959]: I0121 13:33:08.998258 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.063410 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-cp8q7"] Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.063685 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" podUID="fa6bef81-a325-412f-b2c5-80d6b904abd3" containerName="dnsmasq-dns" containerID="cri-o://046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab" gracePeriod=10 Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.302551 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b08f48f-c239-4fe2-9f70-f35c0877fd64" path="/var/lib/kubelet/pods/0b08f48f-c239-4fe2-9f70-f35c0877fd64/volumes" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.653122 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.738283 4959 generic.go:334] "Generic (PLEG): container finished" podID="fa6bef81-a325-412f-b2c5-80d6b904abd3" containerID="046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab" exitCode=0 Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.738342 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" event={"ID":"fa6bef81-a325-412f-b2c5-80d6b904abd3","Type":"ContainerDied","Data":"046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab"} Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.738361 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.738378 4959 scope.go:117] "RemoveContainer" containerID="046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.738367 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-cp8q7" event={"ID":"fa6bef81-a325-412f-b2c5-80d6b904abd3","Type":"ContainerDied","Data":"a413bbb02695193a9b8c2f7cd1531a34900c5193ca60f89cb7da4cdd2ff6b27e"} Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.739639 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecad0604-dd95-4e22-80ec-fdb69f194e13","Type":"ContainerStarted","Data":"cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e"} Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.739666 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecad0604-dd95-4e22-80ec-fdb69f194e13","Type":"ContainerStarted","Data":"07ca54f446c529fcdedfc67f89041e912e0090f39d73cf0e22dd486400aecbaf"} Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.742885 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerStarted","Data":"2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889"} Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.765558 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.765511002 podStartE2EDuration="2.765511002s" podCreationTimestamp="2026-01-21 13:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:33:09.755976533 +0000 UTC m=+1450.719007076" watchObservedRunningTime="2026-01-21 13:33:09.765511002 +0000 UTC m=+1450.728541545" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.769396 4959 scope.go:117] "RemoveContainer" containerID="cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.774911 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-dns-svc\") pod \"fa6bef81-a325-412f-b2c5-80d6b904abd3\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.775033 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-sb\") pod \"fa6bef81-a325-412f-b2c5-80d6b904abd3\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.775131 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-config\") pod \"fa6bef81-a325-412f-b2c5-80d6b904abd3\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.775186 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lvsj\" (UniqueName: \"kubernetes.io/projected/fa6bef81-a325-412f-b2c5-80d6b904abd3-kube-api-access-9lvsj\") pod \"fa6bef81-a325-412f-b2c5-80d6b904abd3\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.775501 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-nb\") pod \"fa6bef81-a325-412f-b2c5-80d6b904abd3\" (UID: \"fa6bef81-a325-412f-b2c5-80d6b904abd3\") " Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.796328 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa6bef81-a325-412f-b2c5-80d6b904abd3-kube-api-access-9lvsj" (OuterVolumeSpecName: "kube-api-access-9lvsj") pod "fa6bef81-a325-412f-b2c5-80d6b904abd3" (UID: "fa6bef81-a325-412f-b2c5-80d6b904abd3"). InnerVolumeSpecName "kube-api-access-9lvsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.805372 4959 scope.go:117] "RemoveContainer" containerID="046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab" Jan 21 13:33:09 crc kubenswrapper[4959]: E0121 13:33:09.809803 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab\": container with ID starting with 046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab not found: ID does not exist" containerID="046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.809853 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab"} err="failed to get container status \"046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab\": rpc error: code = NotFound desc = could not find container \"046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab\": container with ID starting with 046a9f84ab18e9d5f17a636cc11d1592386db14cefd7c015dac6d916066224ab not found: ID does not exist" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.809881 4959 scope.go:117] "RemoveContainer" containerID="cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929" Jan 21 13:33:09 crc kubenswrapper[4959]: E0121 13:33:09.810561 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929\": container with ID starting with cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929 not found: ID does not exist" 
containerID="cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.810609 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929"} err="failed to get container status \"cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929\": rpc error: code = NotFound desc = could not find container \"cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929\": container with ID starting with cabfda19d810729aa1f0148290705f52b2bec0f163170c5a159110951c171929 not found: ID does not exist" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.849366 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa6bef81-a325-412f-b2c5-80d6b904abd3" (UID: "fa6bef81-a325-412f-b2c5-80d6b904abd3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.877725 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.877759 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lvsj\" (UniqueName: \"kubernetes.io/projected/fa6bef81-a325-412f-b2c5-80d6b904abd3-kube-api-access-9lvsj\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.925874 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa6bef81-a325-412f-b2c5-80d6b904abd3" (UID: "fa6bef81-a325-412f-b2c5-80d6b904abd3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.926439 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-config" (OuterVolumeSpecName: "config") pod "fa6bef81-a325-412f-b2c5-80d6b904abd3" (UID: "fa6bef81-a325-412f-b2c5-80d6b904abd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.945815 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa6bef81-a325-412f-b2c5-80d6b904abd3" (UID: "fa6bef81-a325-412f-b2c5-80d6b904abd3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.979193 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.979225 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:09 crc kubenswrapper[4959]: I0121 13:33:09.979234 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6bef81-a325-412f-b2c5-80d6b904abd3-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:10 crc kubenswrapper[4959]: I0121 13:33:10.085608 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-cp8q7"] Jan 21 13:33:10 crc kubenswrapper[4959]: I0121 13:33:10.094232 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-cp8q7"] Jan 21 13:33:10 crc kubenswrapper[4959]: I0121 13:33:10.268484 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:10 crc kubenswrapper[4959]: I0121 13:33:10.753332 4959 generic.go:334] "Generic (PLEG): container finished" podID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerID="ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f" exitCode=0 Jan 21 13:33:10 crc kubenswrapper[4959]: I0121 13:33:10.753412 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqq62" event={"ID":"912ae55e-5977-41bc-8e7b-9e0fee202d9e","Type":"ContainerDied","Data":"ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f"} Jan 21 13:33:11 crc kubenswrapper[4959]: I0121 13:33:11.296741 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa6bef81-a325-412f-b2c5-80d6b904abd3" path="/var/lib/kubelet/pods/fa6bef81-a325-412f-b2c5-80d6b904abd3/volumes" Jan 21 13:33:13 crc kubenswrapper[4959]: I0121 13:33:13.202070 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 13:33:13 crc kubenswrapper[4959]: I0121 13:33:13.818399 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerStarted","Data":"1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9"} Jan 21 13:33:14 crc kubenswrapper[4959]: I0121 13:33:14.870564 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqq62" event={"ID":"912ae55e-5977-41bc-8e7b-9e0fee202d9e","Type":"ContainerStarted","Data":"438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82"} Jan 21 13:33:14 crc kubenswrapper[4959]: I0121 13:33:14.897151 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.725575458 podStartE2EDuration="9.89712843s" podCreationTimestamp="2026-01-21 13:33:05 +0000 UTC" firstStartedPulling="2026-01-21 13:33:06.562394808 +0000 UTC m=+1447.525425351" lastFinishedPulling="2026-01-21 13:33:11.73394778 +0000 UTC m=+1452.696978323" observedRunningTime="2026-01-21 13:33:14.892902501 +0000 UTC m=+1455.855933064" watchObservedRunningTime="2026-01-21 13:33:14.89712843 +0000 UTC m=+1455.860158963" Jan 21 
13:33:14 crc kubenswrapper[4959]: I0121 13:33:14.924015 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqq62" podStartSLOduration=3.568533335 podStartE2EDuration="8.923928894s" podCreationTimestamp="2026-01-21 13:33:06 +0000 UTC" firstStartedPulling="2026-01-21 13:33:08.714551008 +0000 UTC m=+1449.677581551" lastFinishedPulling="2026-01-21 13:33:14.069946577 +0000 UTC m=+1455.032977110" observedRunningTime="2026-01-21 13:33:14.922560726 +0000 UTC m=+1455.885591269" watchObservedRunningTime="2026-01-21 13:33:14.923928894 +0000 UTC m=+1455.886959437" Jan 21 13:33:15 crc kubenswrapper[4959]: I0121 13:33:15.267812 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:15 crc kubenswrapper[4959]: I0121 13:33:15.296495 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:15 crc kubenswrapper[4959]: I0121 13:33:15.894400 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.025708 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.026066 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.116446 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mnsbv"] Jan 21 13:33:16 crc kubenswrapper[4959]: E0121 13:33:16.116848 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6bef81-a325-412f-b2c5-80d6b904abd3" containerName="dnsmasq-dns" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.116868 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6bef81-a325-412f-b2c5-80d6b904abd3" containerName="dnsmasq-dns" Jan 21 13:33:16 crc kubenswrapper[4959]: E0121 13:33:16.116886 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6bef81-a325-412f-b2c5-80d6b904abd3" containerName="init" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.116892 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6bef81-a325-412f-b2c5-80d6b904abd3" containerName="init" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.117070 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa6bef81-a325-412f-b2c5-80d6b904abd3" containerName="dnsmasq-dns" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.117676 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.119661 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.119873 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.127824 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mnsbv"] Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.197304 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-scripts\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.197688 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.197898 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-config-data\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.198061 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m786z\" (UniqueName: \"kubernetes.io/projected/46a209d5-9290-4c05-bff7-afeb8173fac5-kube-api-access-m786z\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.299966 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.300086 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-config-data\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.300156 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m786z\" (UniqueName: \"kubernetes.io/projected/46a209d5-9290-4c05-bff7-afeb8173fac5-kube-api-access-m786z\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.300237 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-scripts\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.308320 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.308515 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-config-data\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.310614 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-scripts\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.320812 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m786z\" (UniqueName: \"kubernetes.io/projected/46a209d5-9290-4c05-bff7-afeb8173fac5-kube-api-access-m786z\") pod \"nova-cell1-cell-mapping-mnsbv\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.437618 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:16 crc kubenswrapper[4959]: I0121 13:33:16.874267 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mnsbv"] Jan 21 13:33:16 crc kubenswrapper[4959]: W0121 13:33:16.882004 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46a209d5_9290_4c05_bff7_afeb8173fac5.slice/crio-b712f71e9fa4eefdb7a98e26e41d5b482c747956de1f68cd158d0ba653bd2c2c WatchSource:0}: Error finding container b712f71e9fa4eefdb7a98e26e41d5b482c747956de1f68cd158d0ba653bd2c2c: Status 404 returned error can't find the container with id b712f71e9fa4eefdb7a98e26e41d5b482c747956de1f68cd158d0ba653bd2c2c Jan 21 13:33:17 crc kubenswrapper[4959]: I0121 13:33:17.041258 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:33:17 crc kubenswrapper[4959]: I0121 13:33:17.041259 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:33:17 crc kubenswrapper[4959]: I0121 13:33:17.116559 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:17 crc kubenswrapper[4959]: I0121 13:33:17.116595 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:17 crc kubenswrapper[4959]: I0121 13:33:17.907944 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mnsbv" event={"ID":"46a209d5-9290-4c05-bff7-afeb8173fac5","Type":"ContainerStarted","Data":"5c953a7530362dfff0a24dd3641fc5ce8dc9b691362b68f13d741d7066acfccd"} Jan 21 13:33:17 crc kubenswrapper[4959]: I0121 13:33:17.908504 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mnsbv" event={"ID":"46a209d5-9290-4c05-bff7-afeb8173fac5","Type":"ContainerStarted","Data":"b712f71e9fa4eefdb7a98e26e41d5b482c747956de1f68cd158d0ba653bd2c2c"} Jan 21 13:33:17 crc kubenswrapper[4959]: I0121 13:33:17.925594 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mnsbv" podStartSLOduration=1.9255728749999999 podStartE2EDuration="1.925572875s" podCreationTimestamp="2026-01-21 13:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:33:17.924803624 +0000 UTC m=+1458.887834167" watchObservedRunningTime="2026-01-21 13:33:17.925572875 +0000 UTC m=+1458.888603408" Jan 21 13:33:18 crc kubenswrapper[4959]: I0121 13:33:18.165130 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqq62" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="registry-server" probeResult="failure" output=< Jan 21 13:33:18 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 13:33:18 crc kubenswrapper[4959]: > Jan 21 13:33:18 crc kubenswrapper[4959]: I0121 
13:33:18.202454 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 13:33:18 crc kubenswrapper[4959]: I0121 13:33:18.227644 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 13:33:18 crc kubenswrapper[4959]: I0121 13:33:18.954912 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 13:33:22 crc kubenswrapper[4959]: I0121 13:33:22.950630 4959 generic.go:334] "Generic (PLEG): container finished" podID="46a209d5-9290-4c05-bff7-afeb8173fac5" containerID="5c953a7530362dfff0a24dd3641fc5ce8dc9b691362b68f13d741d7066acfccd" exitCode=0 Jan 21 13:33:22 crc kubenswrapper[4959]: I0121 13:33:22.950661 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mnsbv" event={"ID":"46a209d5-9290-4c05-bff7-afeb8173fac5","Type":"ContainerDied","Data":"5c953a7530362dfff0a24dd3641fc5ce8dc9b691362b68f13d741d7066acfccd"} Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.319221 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.444846 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-config-data\") pod \"46a209d5-9290-4c05-bff7-afeb8173fac5\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.445069 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m786z\" (UniqueName: \"kubernetes.io/projected/46a209d5-9290-4c05-bff7-afeb8173fac5-kube-api-access-m786z\") pod \"46a209d5-9290-4c05-bff7-afeb8173fac5\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.445121 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-combined-ca-bundle\") pod \"46a209d5-9290-4c05-bff7-afeb8173fac5\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.445172 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-scripts\") pod \"46a209d5-9290-4c05-bff7-afeb8173fac5\" (UID: \"46a209d5-9290-4c05-bff7-afeb8173fac5\") " Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.450826 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a209d5-9290-4c05-bff7-afeb8173fac5-kube-api-access-m786z" (OuterVolumeSpecName: "kube-api-access-m786z") pod "46a209d5-9290-4c05-bff7-afeb8173fac5" (UID: "46a209d5-9290-4c05-bff7-afeb8173fac5"). InnerVolumeSpecName "kube-api-access-m786z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.451551 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-scripts" (OuterVolumeSpecName: "scripts") pod "46a209d5-9290-4c05-bff7-afeb8173fac5" (UID: "46a209d5-9290-4c05-bff7-afeb8173fac5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.471475 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-config-data" (OuterVolumeSpecName: "config-data") pod "46a209d5-9290-4c05-bff7-afeb8173fac5" (UID: "46a209d5-9290-4c05-bff7-afeb8173fac5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.472981 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46a209d5-9290-4c05-bff7-afeb8173fac5" (UID: "46a209d5-9290-4c05-bff7-afeb8173fac5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.547780 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m786z\" (UniqueName: \"kubernetes.io/projected/46a209d5-9290-4c05-bff7-afeb8173fac5-kube-api-access-m786z\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.547819 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.547832 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.547842 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a209d5-9290-4c05-bff7-afeb8173fac5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.983036 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mnsbv" event={"ID":"46a209d5-9290-4c05-bff7-afeb8173fac5","Type":"ContainerDied","Data":"b712f71e9fa4eefdb7a98e26e41d5b482c747956de1f68cd158d0ba653bd2c2c"} Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.983334 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b712f71e9fa4eefdb7a98e26e41d5b482c747956de1f68cd158d0ba653bd2c2c" Jan 21 13:33:24 crc kubenswrapper[4959]: I0121 13:33:24.983466 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mnsbv" Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.157205 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.157489 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-log" containerID="cri-o://1464c55e7758d240870b3237aeeeaa893c83ac174058f5684f804aa5eec70103" gracePeriod=30 Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.157570 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-api" containerID="cri-o://8045fbc89f8d0c74657ae512b7aa9457088fc5034b49e87acf4ba1ea5b5a9de0" gracePeriod=30 Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.168412 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.168657 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ecad0604-dd95-4e22-80ec-fdb69f194e13" containerName="nova-scheduler-scheduler" containerID="cri-o://cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e" gracePeriod=30 Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.200884 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.201343 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-metadata" containerID="cri-o://0dc8a031e7c20f51405359cee25acbe3ad1c1c0ba777f75e84f09346f999d826" gracePeriod=30 Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.201174 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-log" containerID="cri-o://79bfde0ad493ab04463861b02a745efa0ca5ed2363b0ca6c87888302ae699d72" gracePeriod=30 Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.994614 4959 generic.go:334] "Generic (PLEG): container finished" podID="bf899835-6388-4051-a969-141f848c1e47" containerID="1464c55e7758d240870b3237aeeeaa893c83ac174058f5684f804aa5eec70103" exitCode=143 Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.994695 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf899835-6388-4051-a969-141f848c1e47","Type":"ContainerDied","Data":"1464c55e7758d240870b3237aeeeaa893c83ac174058f5684f804aa5eec70103"} Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.997339 4959 generic.go:334] "Generic (PLEG): container finished" podID="38325378-8a06-4b6f-8de3-41705b371331" containerID="79bfde0ad493ab04463861b02a745efa0ca5ed2363b0ca6c87888302ae699d72" exitCode=143 Jan 21 13:33:25 crc kubenswrapper[4959]: I0121 13:33:25.997371 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38325378-8a06-4b6f-8de3-41705b371331","Type":"ContainerDied","Data":"79bfde0ad493ab04463861b02a745efa0ca5ed2363b0ca6c87888302ae699d72"} Jan 21 13:33:28 crc kubenswrapper[4959]: I0121 13:33:28.162467 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqq62" 
podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="registry-server" probeResult="failure" output=< Jan 21 13:33:28 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 13:33:28 crc kubenswrapper[4959]: > Jan 21 13:33:28 crc kubenswrapper[4959]: E0121 13:33:28.202108 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e is running failed: container process not found" containerID="cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 13:33:28 crc kubenswrapper[4959]: E0121 13:33:28.202885 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e is running failed: container process not found" containerID="cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 13:33:28 crc kubenswrapper[4959]: E0121 13:33:28.203198 4959 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e is running failed: container process not found" containerID="cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 13:33:28 crc kubenswrapper[4959]: E0121 13:33:28.203519 4959 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ecad0604-dd95-4e22-80ec-fdb69f194e13" containerName="nova-scheduler-scheduler" Jan 21 13:33:28 crc kubenswrapper[4959]: I0121 13:33:28.994979 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": dial tcp 10.217.0.179:8775: connect: connection refused" Jan 21 13:33:28 crc kubenswrapper[4959]: I0121 13:33:28.995695 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": dial tcp 10.217.0.179:8775: connect: connection refused" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.007055 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.035998 4959 generic.go:334] "Generic (PLEG): container finished" podID="bf899835-6388-4051-a969-141f848c1e47" containerID="8045fbc89f8d0c74657ae512b7aa9457088fc5034b49e87acf4ba1ea5b5a9de0" exitCode=0 Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.036065 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf899835-6388-4051-a969-141f848c1e47","Type":"ContainerDied","Data":"8045fbc89f8d0c74657ae512b7aa9457088fc5034b49e87acf4ba1ea5b5a9de0"} Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.037619 4959 generic.go:334] "Generic (PLEG): container finished" podID="ecad0604-dd95-4e22-80ec-fdb69f194e13" containerID="cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e" exitCode=0 Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.037665 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecad0604-dd95-4e22-80ec-fdb69f194e13","Type":"ContainerDied","Data":"cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e"} Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.037682 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ecad0604-dd95-4e22-80ec-fdb69f194e13","Type":"ContainerDied","Data":"07ca54f446c529fcdedfc67f89041e912e0090f39d73cf0e22dd486400aecbaf"} Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.037697 4959 scope.go:117] "RemoveContainer" containerID="cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.037797 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.040660 4959 generic.go:334] "Generic (PLEG): container finished" podID="38325378-8a06-4b6f-8de3-41705b371331" containerID="0dc8a031e7c20f51405359cee25acbe3ad1c1c0ba777f75e84f09346f999d826" exitCode=0 Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.040707 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38325378-8a06-4b6f-8de3-41705b371331","Type":"ContainerDied","Data":"0dc8a031e7c20f51405359cee25acbe3ad1c1c0ba777f75e84f09346f999d826"} Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.063364 4959 scope.go:117] "RemoveContainer" containerID="cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e" Jan 21 13:33:29 crc kubenswrapper[4959]: E0121 13:33:29.073816 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e\": container with ID starting with cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e not found: ID does not exist" containerID="cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.074193 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e"} err="failed to get container status \"cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e\": rpc error: code = NotFound desc = could not find container \"cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e\": container with ID starting with 
cb6a66e107631fb578e2cf68af9b8b897eba2089d32016febff96dfae53ef77e not found: ID does not exist" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.133085 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-combined-ca-bundle\") pod \"ecad0604-dd95-4e22-80ec-fdb69f194e13\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.133590 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-config-data\") pod \"ecad0604-dd95-4e22-80ec-fdb69f194e13\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.133746 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjhr9\" (UniqueName: \"kubernetes.io/projected/ecad0604-dd95-4e22-80ec-fdb69f194e13-kube-api-access-qjhr9\") pod \"ecad0604-dd95-4e22-80ec-fdb69f194e13\" (UID: \"ecad0604-dd95-4e22-80ec-fdb69f194e13\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.165732 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecad0604-dd95-4e22-80ec-fdb69f194e13-kube-api-access-qjhr9" (OuterVolumeSpecName: "kube-api-access-qjhr9") pod "ecad0604-dd95-4e22-80ec-fdb69f194e13" (UID: "ecad0604-dd95-4e22-80ec-fdb69f194e13"). InnerVolumeSpecName "kube-api-access-qjhr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.170360 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-config-data" (OuterVolumeSpecName: "config-data") pod "ecad0604-dd95-4e22-80ec-fdb69f194e13" (UID: "ecad0604-dd95-4e22-80ec-fdb69f194e13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.174210 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecad0604-dd95-4e22-80ec-fdb69f194e13" (UID: "ecad0604-dd95-4e22-80ec-fdb69f194e13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.236434 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.236463 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecad0604-dd95-4e22-80ec-fdb69f194e13-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.236473 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjhr9\" (UniqueName: \"kubernetes.io/projected/ecad0604-dd95-4e22-80ec-fdb69f194e13-kube-api-access-qjhr9\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.378153 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.403168 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.411991 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:29 crc kubenswrapper[4959]: E0121 13:33:29.412249 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a209d5-9290-4c05-bff7-afeb8173fac5" containerName="nova-manage" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.412261 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a209d5-9290-4c05-bff7-afeb8173fac5" containerName="nova-manage" Jan 21 13:33:29 crc kubenswrapper[4959]: E0121 13:33:29.412286 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecad0604-dd95-4e22-80ec-fdb69f194e13" containerName="nova-scheduler-scheduler" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.412292 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecad0604-dd95-4e22-80ec-fdb69f194e13" containerName="nova-scheduler-scheduler" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.412465 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecad0604-dd95-4e22-80ec-fdb69f194e13" containerName="nova-scheduler-scheduler" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.412488 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a209d5-9290-4c05-bff7-afeb8173fac5" containerName="nova-manage" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.412984 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.414365 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.422989 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.441793 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qszs\" (UniqueName: \"kubernetes.io/projected/c1c97e4a-5c7a-435a-936d-37db58539c69-kube-api-access-9qszs\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.441864 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c97e4a-5c7a-435a-936d-37db58539c69-config-data\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.441917 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c97e4a-5c7a-435a-936d-37db58539c69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.513915 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.545420 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-internal-tls-certs\") pod \"bf899835-6388-4051-a969-141f848c1e47\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.545474 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-config-data\") pod \"bf899835-6388-4051-a969-141f848c1e47\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.545549 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-combined-ca-bundle\") pod \"bf899835-6388-4051-a969-141f848c1e47\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.545586 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-public-tls-certs\") pod \"bf899835-6388-4051-a969-141f848c1e47\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.545605 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9km9f\" (UniqueName: \"kubernetes.io/projected/bf899835-6388-4051-a969-141f848c1e47-kube-api-access-9km9f\") pod \"bf899835-6388-4051-a969-141f848c1e47\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 
13:33:29.545681 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf899835-6388-4051-a969-141f848c1e47-logs\") pod \"bf899835-6388-4051-a969-141f848c1e47\" (UID: \"bf899835-6388-4051-a969-141f848c1e47\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.545983 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qszs\" (UniqueName: \"kubernetes.io/projected/c1c97e4a-5c7a-435a-936d-37db58539c69-kube-api-access-9qszs\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.546027 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c97e4a-5c7a-435a-936d-37db58539c69-config-data\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.546073 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c97e4a-5c7a-435a-936d-37db58539c69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.550053 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf899835-6388-4051-a969-141f848c1e47-logs" (OuterVolumeSpecName: "logs") pod "bf899835-6388-4051-a969-141f848c1e47" (UID: "bf899835-6388-4051-a969-141f848c1e47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.551888 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.553363 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf899835-6388-4051-a969-141f848c1e47-kube-api-access-9km9f" (OuterVolumeSpecName: "kube-api-access-9km9f") pod "bf899835-6388-4051-a969-141f848c1e47" (UID: "bf899835-6388-4051-a969-141f848c1e47"). InnerVolumeSpecName "kube-api-access-9km9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.553911 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c97e4a-5c7a-435a-936d-37db58539c69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.554485 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c97e4a-5c7a-435a-936d-37db58539c69-config-data\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.576167 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qszs\" (UniqueName: \"kubernetes.io/projected/c1c97e4a-5c7a-435a-936d-37db58539c69-kube-api-access-9qszs\") pod \"nova-scheduler-0\" (UID: \"c1c97e4a-5c7a-435a-936d-37db58539c69\") " pod="openstack/nova-scheduler-0" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.590766 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-config-data" (OuterVolumeSpecName: "config-data") pod "bf899835-6388-4051-a969-141f848c1e47" (UID: "bf899835-6388-4051-a969-141f848c1e47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.600327 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf899835-6388-4051-a969-141f848c1e47" (UID: "bf899835-6388-4051-a969-141f848c1e47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.609074 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bf899835-6388-4051-a969-141f848c1e47" (UID: "bf899835-6388-4051-a969-141f848c1e47"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.622813 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bf899835-6388-4051-a969-141f848c1e47" (UID: "bf899835-6388-4051-a969-141f848c1e47"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.647843 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-combined-ca-bundle\") pod \"38325378-8a06-4b6f-8de3-41705b371331\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.647904 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-config-data\") pod \"38325378-8a06-4b6f-8de3-41705b371331\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.647979 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-nova-metadata-tls-certs\") pod \"38325378-8a06-4b6f-8de3-41705b371331\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.648078 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwrtw\" (UniqueName: \"kubernetes.io/projected/38325378-8a06-4b6f-8de3-41705b371331-kube-api-access-dwrtw\") pod \"38325378-8a06-4b6f-8de3-41705b371331\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.648143 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38325378-8a06-4b6f-8de3-41705b371331-logs\") pod \"38325378-8a06-4b6f-8de3-41705b371331\" (UID: \"38325378-8a06-4b6f-8de3-41705b371331\") " Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.648570 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf899835-6388-4051-a969-141f848c1e47-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.648589 4959 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.648599 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.648608 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.648616 4959 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf899835-6388-4051-a969-141f848c1e47-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.648625 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9km9f\" (UniqueName: \"kubernetes.io/projected/bf899835-6388-4051-a969-141f848c1e47-kube-api-access-9km9f\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.649252 4959 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/38325378-8a06-4b6f-8de3-41705b371331-logs" (OuterVolumeSpecName: "logs") pod "38325378-8a06-4b6f-8de3-41705b371331" (UID: "38325378-8a06-4b6f-8de3-41705b371331"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.652409 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38325378-8a06-4b6f-8de3-41705b371331-kube-api-access-dwrtw" (OuterVolumeSpecName: "kube-api-access-dwrtw") pod "38325378-8a06-4b6f-8de3-41705b371331" (UID: "38325378-8a06-4b6f-8de3-41705b371331"). InnerVolumeSpecName "kube-api-access-dwrtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.679523 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38325378-8a06-4b6f-8de3-41705b371331" (UID: "38325378-8a06-4b6f-8de3-41705b371331"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.679898 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-config-data" (OuterVolumeSpecName: "config-data") pod "38325378-8a06-4b6f-8de3-41705b371331" (UID: "38325378-8a06-4b6f-8de3-41705b371331"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.697937 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "38325378-8a06-4b6f-8de3-41705b371331" (UID: "38325378-8a06-4b6f-8de3-41705b371331"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.749988 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwrtw\" (UniqueName: \"kubernetes.io/projected/38325378-8a06-4b6f-8de3-41705b371331-kube-api-access-dwrtw\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.750033 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38325378-8a06-4b6f-8de3-41705b371331-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.750049 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.750062 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.750076 4959 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/38325378-8a06-4b6f-8de3-41705b371331-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:29 crc kubenswrapper[4959]: I0121 13:33:29.827132 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.054980 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.054995 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bf899835-6388-4051-a969-141f848c1e47","Type":"ContainerDied","Data":"6e516a7763cc914776aa6d5a4d41882c8e246774a6e9e9f896bd763722ce8812"} Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.055047 4959 scope.go:117] "RemoveContainer" containerID="8045fbc89f8d0c74657ae512b7aa9457088fc5034b49e87acf4ba1ea5b5a9de0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.062713 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38325378-8a06-4b6f-8de3-41705b371331","Type":"ContainerDied","Data":"3bed7d6ae05d12f2bc91626f661129a8310771e7b723595eeb48dc1686896c03"} Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.062805 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.087378 4959 scope.go:117] "RemoveContainer" containerID="1464c55e7758d240870b3237aeeeaa893c83ac174058f5684f804aa5eec70103" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.106367 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.116026 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.132411 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.133457 4959 scope.go:117] "RemoveContainer" containerID="0dc8a031e7c20f51405359cee25acbe3ad1c1c0ba777f75e84f09346f999d826" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.151670 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.181207 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: E0121 13:33:30.182068 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-metadata" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.182088 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-metadata" Jan 21 13:33:30 crc kubenswrapper[4959]: E0121 13:33:30.182148 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-api" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.182159 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-api" Jan 21 13:33:30 crc kubenswrapper[4959]: E0121 13:33:30.182177 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-log" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.182184 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-log" Jan 21 13:33:30 crc kubenswrapper[4959]: E0121 13:33:30.182218 4959 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-log" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.182228 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-log" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.182902 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-log" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.182928 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-api" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.182956 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="38325378-8a06-4b6f-8de3-41705b371331" containerName="nova-metadata-metadata" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.182998 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf899835-6388-4051-a969-141f848c1e47" containerName="nova-api-log" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.192695 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.195046 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.199131 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.201133 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.223649 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.230332 4959 scope.go:117] "RemoveContainer" containerID="79bfde0ad493ab04463861b02a745efa0ca5ed2363b0ca6c87888302ae699d72" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.234215 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.237768 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.239693 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.244506 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.246345 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.278018 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-logs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.278132 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.278219 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-config-data\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.278279 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgfj4\" (UniqueName: \"kubernetes.io/projected/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-kube-api-access-vgfj4\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.278354 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.278381 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.278760 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.381108 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-config-data\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.381168 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-config-data\") 
pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.381288 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.381327 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgfj4\" (UniqueName: \"kubernetes.io/projected/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-kube-api-access-vgfj4\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.381379 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4br2\" (UniqueName: \"kubernetes.io/projected/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-kube-api-access-n4br2\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.382190 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.382221 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.382321 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-logs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.382356 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.382606 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-logs\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.382649 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.383450 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-logs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.385645 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.385908 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-config-data\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.386792 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.388053 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.402687 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgfj4\" (UniqueName: \"kubernetes.io/projected/49b276b1-4fc7-4b32-88ba-2797fb6dcf0d-kube-api-access-vgfj4\") pod \"nova-api-0\" (UID: \"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d\") " pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.485537 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-logs\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.485606 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.485664 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-config-data\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.485726 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.485758 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4br2\" (UniqueName: 
\"kubernetes.io/projected/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-kube-api-access-n4br2\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.486198 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-logs\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.489048 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-config-data\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.490579 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.491070 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.502646 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4br2\" (UniqueName: \"kubernetes.io/projected/d1a88128-2d1e-45a7-b5b2-3e9dd073611e-kube-api-access-n4br2\") pod \"nova-metadata-0\" (UID: \"d1a88128-2d1e-45a7-b5b2-3e9dd073611e\") " pod="openstack/nova-metadata-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.533378 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:33:30 crc kubenswrapper[4959]: I0121 13:33:30.566883 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.049745 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:33:31 crc kubenswrapper[4959]: W0121 13:33:31.052719 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49b276b1_4fc7_4b32_88ba_2797fb6dcf0d.slice/crio-b77e3b83a756547cb02091da69c1bb08f3ebd9890cb254244ae3567fb199b9ad WatchSource:0}: Error finding container b77e3b83a756547cb02091da69c1bb08f3ebd9890cb254244ae3567fb199b9ad: Status 404 returned error can't find the container with id b77e3b83a756547cb02091da69c1bb08f3ebd9890cb254244ae3567fb199b9ad Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.075058 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d","Type":"ContainerStarted","Data":"b77e3b83a756547cb02091da69c1bb08f3ebd9890cb254244ae3567fb199b9ad"} Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.076839 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c1c97e4a-5c7a-435a-936d-37db58539c69","Type":"ContainerStarted","Data":"aba982e2e4f94a1e23616b05b3028641953e1746d396d54682e50ee1c3be5333"} Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.076888 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c1c97e4a-5c7a-435a-936d-37db58539c69","Type":"ContainerStarted","Data":"117daca104d118c4a06b692a6eccfd349593a27a7d2d7e199a25a40049149cce"} Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.097872 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.097856915 podStartE2EDuration="2.097856915s" podCreationTimestamp="2026-01-21 13:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:33:31.093948735 +0000 UTC m=+1472.056979278" watchObservedRunningTime="2026-01-21 13:33:31.097856915 +0000 UTC m=+1472.060887458" Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.203690 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.305909 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38325378-8a06-4b6f-8de3-41705b371331" path="/var/lib/kubelet/pods/38325378-8a06-4b6f-8de3-41705b371331/volumes" Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.307127 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf899835-6388-4051-a969-141f848c1e47" path="/var/lib/kubelet/pods/bf899835-6388-4051-a969-141f848c1e47/volumes" Jan 21 13:33:31 crc kubenswrapper[4959]: I0121 13:33:31.307895 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecad0604-dd95-4e22-80ec-fdb69f194e13" path="/var/lib/kubelet/pods/ecad0604-dd95-4e22-80ec-fdb69f194e13/volumes" Jan 21 13:33:32 crc kubenswrapper[4959]: I0121 13:33:32.087650 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d","Type":"ContainerStarted","Data":"7cb375fd4c8f933166f260b8c0aaf1b9109f628f2195e1ed3d9b11256d93c9be"} Jan 21 13:33:32 crc kubenswrapper[4959]: I0121 13:33:32.087983 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"49b276b1-4fc7-4b32-88ba-2797fb6dcf0d","Type":"ContainerStarted","Data":"c34b957e51c855febe01015caeb327f02c3c6bda51c98f294c3850e6e8cc00c0"} Jan 21 13:33:32 crc kubenswrapper[4959]: I0121 13:33:32.089434 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1a88128-2d1e-45a7-b5b2-3e9dd073611e","Type":"ContainerStarted","Data":"ca1d84f84029702b86e88b720d63533a4a30c167d10b735d3585125870861359"} Jan 21 13:33:32 crc kubenswrapper[4959]: I0121 13:33:32.089462 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1a88128-2d1e-45a7-b5b2-3e9dd073611e","Type":"ContainerStarted","Data":"121587f20f56d9f4998565288217f74e528aa5d170b9a694f6e6ced2ec3e7b95"} Jan 21 13:33:32 crc kubenswrapper[4959]: I0121 13:33:32.089473 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1a88128-2d1e-45a7-b5b2-3e9dd073611e","Type":"ContainerStarted","Data":"02c7b17bd328d893f562f013d82d10cb40702bdafed6c7b75b4743cc758facea"} Jan 21 13:33:32 crc kubenswrapper[4959]: I0121 13:33:32.116007 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.115985754 podStartE2EDuration="2.115985754s" podCreationTimestamp="2026-01-21 13:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:33:32.113556815 +0000 UTC m=+1473.076587358" watchObservedRunningTime="2026-01-21 13:33:32.115985754 +0000 UTC m=+1473.079016297" Jan 21 13:33:32 crc kubenswrapper[4959]: I0121 13:33:32.131692 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.131677556 podStartE2EDuration="2.131677556s" podCreationTimestamp="2026-01-21 13:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:33:32.131085349 +0000 UTC m=+1473.094115892" watchObservedRunningTime="2026-01-21 13:33:32.131677556 +0000 UTC m=+1473.094708099" Jan 21 13:33:34 crc kubenswrapper[4959]: I0121 13:33:34.827751 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 13:33:35 crc kubenswrapper[4959]: I0121 13:33:35.567826 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:33:35 crc kubenswrapper[4959]: I0121 13:33:35.568288 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:33:36 crc kubenswrapper[4959]: I0121 13:33:36.000896 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:33:36 crc kubenswrapper[4959]: I0121 13:33:36.009216 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 13:33:37 crc kubenswrapper[4959]: I0121 13:33:37.164312 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:37 crc kubenswrapper[4959]: I0121 13:33:37.215397 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:38 crc kubenswrapper[4959]: I0121 13:33:38.147839 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqq62"] Jan 21 13:33:39 crc 
kubenswrapper[4959]: I0121 13:33:39.160030 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqq62" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="registry-server" containerID="cri-o://438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82" gracePeriod=2 Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.590064 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.778399 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjrbn\" (UniqueName: \"kubernetes.io/projected/912ae55e-5977-41bc-8e7b-9e0fee202d9e-kube-api-access-hjrbn\") pod \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.778458 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-utilities\") pod \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.778698 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-catalog-content\") pod \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\" (UID: \"912ae55e-5977-41bc-8e7b-9e0fee202d9e\") " Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.781147 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-utilities" (OuterVolumeSpecName: "utilities") pod "912ae55e-5977-41bc-8e7b-9e0fee202d9e" (UID: "912ae55e-5977-41bc-8e7b-9e0fee202d9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.802299 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/912ae55e-5977-41bc-8e7b-9e0fee202d9e-kube-api-access-hjrbn" (OuterVolumeSpecName: "kube-api-access-hjrbn") pod "912ae55e-5977-41bc-8e7b-9e0fee202d9e" (UID: "912ae55e-5977-41bc-8e7b-9e0fee202d9e"). InnerVolumeSpecName "kube-api-access-hjrbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.841350 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.870815 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.880180 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjrbn\" (UniqueName: \"kubernetes.io/projected/912ae55e-5977-41bc-8e7b-9e0fee202d9e-kube-api-access-hjrbn\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.880209 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.933705 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "912ae55e-5977-41bc-8e7b-9e0fee202d9e" (UID: "912ae55e-5977-41bc-8e7b-9e0fee202d9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:33:39 crc kubenswrapper[4959]: I0121 13:33:39.983110 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/912ae55e-5977-41bc-8e7b-9e0fee202d9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.170608 4959 generic.go:334] "Generic (PLEG): container finished" podID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerID="438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82" exitCode=0 Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.170663 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqq62" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.170708 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqq62" event={"ID":"912ae55e-5977-41bc-8e7b-9e0fee202d9e","Type":"ContainerDied","Data":"438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82"} Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.170739 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqq62" event={"ID":"912ae55e-5977-41bc-8e7b-9e0fee202d9e","Type":"ContainerDied","Data":"4b62dea88aa08b934f0b8735142edbce7c72124cbb1373eca4966ce8b122273a"} Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.170758 4959 scope.go:117] "RemoveContainer" containerID="438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.192917 4959 scope.go:117] "RemoveContainer" containerID="ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.210822 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.212505 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqq62"] Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.220417 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqq62"] Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.233363 4959 scope.go:117] "RemoveContainer" containerID="c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.271529 4959 scope.go:117] "RemoveContainer" containerID="438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82" Jan 21 13:33:40 crc kubenswrapper[4959]: E0121 13:33:40.272327 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82\": container with ID starting with 438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82 not found: ID does not exist" containerID="438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.272357 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82"} err="failed to get container status \"438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82\": rpc error: code = NotFound desc = could not find container \"438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82\": container with ID starting with 438771da43a6bfcd6906b880f036aa221f0f0305cd9d484e2502c18f63f30e82 not found: ID does not exist" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.272384 4959 scope.go:117] "RemoveContainer" containerID="ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f" Jan 21 13:33:40 crc kubenswrapper[4959]: E0121 13:33:40.272676 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f\": container with ID starting with ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f not found: ID does not 
exist" containerID="ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.272699 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f"} err="failed to get container status \"ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f\": rpc error: code = NotFound desc = could not find container \"ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f\": container with ID starting with ff313278b586bb73825c9d05f6a66e6ff5ce4eade5bc8adb0220f818e93c728f not found: ID does not exist" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.272714 4959 scope.go:117] "RemoveContainer" containerID="c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac" Jan 21 13:33:40 crc kubenswrapper[4959]: E0121 13:33:40.272906 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac\": container with ID starting with c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac not found: ID does not exist" containerID="c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.272926 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac"} err="failed to get container status \"c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac\": rpc error: code = NotFound desc = could not find container \"c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac\": container with ID starting with c3521e8beb6830d39e3bfc77a77168510e21737e3adb9bf7f4dbc161f217bbac not found: ID does not exist" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.534686 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.534735 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.568289 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 13:33:40 crc kubenswrapper[4959]: I0121 13:33:40.568363 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 13:33:41 crc kubenswrapper[4959]: I0121 13:33:41.302361 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" path="/var/lib/kubelet/pods/912ae55e-5977-41bc-8e7b-9e0fee202d9e/volumes" Jan 21 13:33:41 crc kubenswrapper[4959]: I0121 13:33:41.548279 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="49b276b1-4fc7-4b32-88ba-2797fb6dcf0d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:33:41 crc kubenswrapper[4959]: I0121 13:33:41.548305 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="49b276b1-4fc7-4b32-88ba-2797fb6dcf0d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 21 13:33:41 crc kubenswrapper[4959]: I0121 13:33:41.599355 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d1a88128-2d1e-45a7-b5b2-3e9dd073611e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:33:41 crc kubenswrapper[4959]: I0121 13:33:41.599369 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d1a88128-2d1e-45a7-b5b2-3e9dd073611e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.541345 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.542138 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.542386 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.542563 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.548400 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.549360 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.574654 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.575432 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 13:33:50 crc kubenswrapper[4959]: I0121 13:33:50.587628 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 13:33:51 crc kubenswrapper[4959]: I0121 13:33:51.275337 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 13:33:51 crc kubenswrapper[4959]: I0121 13:33:51.379568 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:33:51 crc kubenswrapper[4959]: I0121 13:33:51.379643 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:33:59 crc kubenswrapper[4959]: I0121 13:33:59.898773 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:34:00 crc kubenswrapper[4959]: I0121 13:34:00.858896 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:34:03 crc kubenswrapper[4959]: I0121 13:34:03.948486 4959 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerName="rabbitmq" containerID="cri-o://48154a5bea0cee803d85fa08d60fae7a604f5f44a50c741949174835b02f3dac" gracePeriod=604796 Jan 21 13:34:05 crc kubenswrapper[4959]: I0121 13:34:05.298443 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="56f613f3-9dc0-438c-8232-190c680ab312" containerName="rabbitmq" containerID="cri-o://01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744" gracePeriod=604796 Jan 21 13:34:08 crc kubenswrapper[4959]: I0121 13:34:08.414693 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="56f613f3-9dc0-438c-8232-190c680ab312" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 21 13:34:08 crc kubenswrapper[4959]: I0121 13:34:08.424921 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.439729 4959 generic.go:334] "Generic (PLEG): container finished" podID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerID="48154a5bea0cee803d85fa08d60fae7a604f5f44a50c741949174835b02f3dac" exitCode=0 Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.439818 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2","Type":"ContainerDied","Data":"48154a5bea0cee803d85fa08d60fae7a604f5f44a50c741949174835b02f3dac"} Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.527393 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.576754 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-erlang-cookie\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.577066 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-tls\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.577177 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-pod-info\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.577256 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zdb7\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-kube-api-access-8zdb7\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.577368 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-confd\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.577706 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-plugins\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.577836 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-config-data\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.577958 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-erlang-cookie-secret\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.578433 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-server-conf\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.578521 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-plugins-conf\") pod 
\"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.578679 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\" (UID: \"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2\") " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.579288 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.579378 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.590937 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.594234 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.596062 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-pod-info" (OuterVolumeSpecName: "pod-info") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.603453 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-kube-api-access-8zdb7" (OuterVolumeSpecName: "kube-api-access-8zdb7") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "kube-api-access-8zdb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.614326 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.632787 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.650239 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-config-data" (OuterVolumeSpecName: "config-data") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682779 4959 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682816 4959 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682828 4959 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682838 4959 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682846 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zdb7\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-kube-api-access-8zdb7\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682856 4959 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682864 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682874 4959 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.682881 4959 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.684461 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-server-conf" (OuterVolumeSpecName: "server-conf") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.738336 4959 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.784771 4959 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.784802 4959 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.790194 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" (UID: "3b3273a9-7ce3-48ea-9546-ecb560a2d6b2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:10 crc kubenswrapper[4959]: I0121 13:34:10.886232 4959 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.449924 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3b3273a9-7ce3-48ea-9546-ecb560a2d6b2","Type":"ContainerDied","Data":"556474c70bc1657b5687c1bd6a12c799aa82528a920da5559f8fe194550ee83d"} Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.449972 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.450245 4959 scope.go:117] "RemoveContainer" containerID="48154a5bea0cee803d85fa08d60fae7a604f5f44a50c741949174835b02f3dac" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.570396 4959 scope.go:117] "RemoveContainer" containerID="489878be51100bbc8edd0fe92d1f85d34e280d023ad9591d04ed79f7501bbf46" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.583446 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.593954 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.634400 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:34:11 crc kubenswrapper[4959]: E0121 13:34:11.634764 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerName="setup-container" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.634781 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerName="setup-container" Jan 21 13:34:11 crc kubenswrapper[4959]: E0121 13:34:11.634795 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="extract-utilities" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.634801 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="extract-utilities" Jan 21 13:34:11 crc kubenswrapper[4959]: E0121 13:34:11.634825 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerName="rabbitmq" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.634832 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerName="rabbitmq" Jan 21 13:34:11 crc kubenswrapper[4959]: E0121 13:34:11.634843 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="registry-server" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.634849 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="registry-server" Jan 21 13:34:11 crc kubenswrapper[4959]: E0121 13:34:11.634859 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="extract-content" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.634866 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="extract-content" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.635051 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="912ae55e-5977-41bc-8e7b-9e0fee202d9e" containerName="registry-server" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.635062 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" containerName="rabbitmq" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.637253 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.640492 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.640525 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.640674 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.640954 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.640976 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.641266 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qrtdw" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.641692 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.659111 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.697860 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.697912 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.697953 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.697974 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.697993 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d94ce670-7f1f-426a-a78f-5b62cf5919cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.698032 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.698072 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b59r\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-kube-api-access-7b59r\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.698092 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.698120 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d94ce670-7f1f-426a-a78f-5b62cf5919cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.698144 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.698168 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.799706 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.799785 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.799846 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.799875 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " 
pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.799895 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d94ce670-7f1f-426a-a78f-5b62cf5919cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.799956 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800020 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b59r\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-kube-api-access-7b59r\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800051 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800073 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d94ce670-7f1f-426a-a78f-5b62cf5919cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800116 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800145 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800287 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800581 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800777 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") 
pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.800910 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.801040 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-config-data\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.801309 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d94ce670-7f1f-426a-a78f-5b62cf5919cf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.804542 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.806618 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d94ce670-7f1f-426a-a78f-5b62cf5919cf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.809920 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.818733 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d94ce670-7f1f-426a-a78f-5b62cf5919cf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.820389 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b59r\" (UniqueName: \"kubernetes.io/projected/d94ce670-7f1f-426a-a78f-5b62cf5919cf-kube-api-access-7b59r\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.840454 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d94ce670-7f1f-426a-a78f-5b62cf5919cf\") " pod="openstack/rabbitmq-server-0" Jan 21 13:34:11 crc kubenswrapper[4959]: I0121 13:34:11.952401 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.304535 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.306996 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56f613f3-9dc0-438c-8232-190c680ab312-erlang-cookie-secret\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307053 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-erlang-cookie\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307211 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307243 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-plugins\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307288 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-server-conf\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307305 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-confd\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307334 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56f613f3-9dc0-438c-8232-190c680ab312-pod-info\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307355 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-tls\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307379 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-config-data\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307410 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6mtn9\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-kube-api-access-6mtn9\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.307439 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-plugins-conf\") pod \"56f613f3-9dc0-438c-8232-190c680ab312\" (UID: \"56f613f3-9dc0-438c-8232-190c680ab312\") " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.309412 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.309450 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.310539 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.313626 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.313759 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.314337 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f613f3-9dc0-438c-8232-190c680ab312-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.315193 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-kube-api-access-6mtn9" (OuterVolumeSpecName: "kube-api-access-6mtn9") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "kube-api-access-6mtn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.324159 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/56f613f3-9dc0-438c-8232-190c680ab312-pod-info" (OuterVolumeSpecName: "pod-info") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.343823 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-config-data" (OuterVolumeSpecName: "config-data") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.373626 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-server-conf" (OuterVolumeSpecName: "server-conf") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.408888 4959 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56f613f3-9dc0-438c-8232-190c680ab312-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.408944 4959 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.408957 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.408969 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mtn9\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-kube-api-access-6mtn9\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.408982 4959 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.408993 4959 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56f613f3-9dc0-438c-8232-190c680ab312-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.409004 4959 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.409040 4959 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.409049 4959 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.409057 4959 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56f613f3-9dc0-438c-8232-190c680ab312-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.430371 4959 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.450012 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "56f613f3-9dc0-438c-8232-190c680ab312" (UID: "56f613f3-9dc0-438c-8232-190c680ab312"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.466533 4959 generic.go:334] "Generic (PLEG): container finished" podID="56f613f3-9dc0-438c-8232-190c680ab312" containerID="01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744" exitCode=0 Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.466574 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56f613f3-9dc0-438c-8232-190c680ab312","Type":"ContainerDied","Data":"01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744"} Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.466598 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56f613f3-9dc0-438c-8232-190c680ab312","Type":"ContainerDied","Data":"f47892ae61264a84b4c54e8ad502945afacf80c6f9d88d6b7a7e6a3e8bbc90f1"} Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.466617 4959 scope.go:117] "RemoveContainer" containerID="01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.466736 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.477795 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.496296 4959 scope.go:117] "RemoveContainer" containerID="648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.510730 4959 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.510766 4959 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56f613f3-9dc0-438c-8232-190c680ab312-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.517927 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.524330 4959 scope.go:117] "RemoveContainer" containerID="01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744" Jan 21 13:34:12 crc kubenswrapper[4959]: E0121 13:34:12.525177 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744\": container with ID starting with 01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744 not found: ID does not exist" containerID="01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.525232 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744"} err="failed to get container status \"01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744\": rpc error: code = NotFound desc = could not find container \"01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744\": container with ID starting with 01b500c5b83f97426962bbb9efa32fc42009adec8a58ae09cd97f3c7f3546744 not found: ID does not exist" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.525260 4959 scope.go:117] "RemoveContainer" containerID="648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648" Jan 21 13:34:12 crc kubenswrapper[4959]: E0121 13:34:12.525664 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648\": container with ID starting with 648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648 not found: ID does not exist" containerID="648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.525693 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648"} err="failed to get container status \"648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648\": rpc error: code = NotFound desc = could not find container \"648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648\": container with ID starting with 648f0bddfa9b88415045cc0e63a3670d332d8f560d2bd7397d04e3f010074648 not found: ID does not exist" Jan 21 
13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.528286 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.549305 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:34:12 crc kubenswrapper[4959]: E0121 13:34:12.550355 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f613f3-9dc0-438c-8232-190c680ab312" containerName="rabbitmq" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.550458 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f613f3-9dc0-438c-8232-190c680ab312" containerName="rabbitmq" Jan 21 13:34:12 crc kubenswrapper[4959]: E0121 13:34:12.550543 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f613f3-9dc0-438c-8232-190c680ab312" containerName="setup-container" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.550604 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f613f3-9dc0-438c-8232-190c680ab312" containerName="setup-container" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.550943 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f613f3-9dc0-438c-8232-190c680ab312" containerName="rabbitmq" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.552237 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.558855 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.558891 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.559006 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.559068 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.559176 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.559226 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-45n44" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.559452 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.578357 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.714341 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/98e47fb2-a96f-4e35-8d32-1226689833b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.714746 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.714780 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.714813 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.714840 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.714861 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ztg\" (UniqueName: \"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-kube-api-access-b8ztg\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.714942 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.714983 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.715024 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/98e47fb2-a96f-4e35-8d32-1226689833b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.715141 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.715172 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.817626 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.817754 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.817808 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ztg\" (UniqueName: \"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-kube-api-access-b8ztg\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.817993 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.818063 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/98e47fb2-a96f-4e35-8d32-1226689833b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.818115 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.818171 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.818211 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.818217 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 
21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.818267 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/98e47fb2-a96f-4e35-8d32-1226689833b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.818386 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.818447 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.825979 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.827594 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.827873 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.828869 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.828980 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98e47fb2-a96f-4e35-8d32-1226689833b0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.831495 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/98e47fb2-a96f-4e35-8d32-1226689833b0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.831693 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.831780 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.832385 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/98e47fb2-a96f-4e35-8d32-1226689833b0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.846759 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ztg\" (UniqueName: \"kubernetes.io/projected/98e47fb2-a96f-4e35-8d32-1226689833b0-kube-api-access-b8ztg\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.849423 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"98e47fb2-a96f-4e35-8d32-1226689833b0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:12 crc kubenswrapper[4959]: I0121 13:34:12.883271 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:13 crc kubenswrapper[4959]: I0121 13:34:13.297749 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3273a9-7ce3-48ea-9546-ecb560a2d6b2" path="/var/lib/kubelet/pods/3b3273a9-7ce3-48ea-9546-ecb560a2d6b2/volumes" Jan 21 13:34:13 crc kubenswrapper[4959]: I0121 13:34:13.299651 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f613f3-9dc0-438c-8232-190c680ab312" path="/var/lib/kubelet/pods/56f613f3-9dc0-438c-8232-190c680ab312/volumes" Jan 21 13:34:13 crc kubenswrapper[4959]: I0121 13:34:13.322454 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:34:13 crc kubenswrapper[4959]: I0121 13:34:13.475388 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"98e47fb2-a96f-4e35-8d32-1226689833b0","Type":"ContainerStarted","Data":"1cec00619963a6e288c64dc6eedf69d88390e5f2981ffc1ac05b3283f8468ca7"} Jan 21 13:34:13 crc kubenswrapper[4959]: I0121 13:34:13.479020 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d94ce670-7f1f-426a-a78f-5b62cf5919cf","Type":"ContainerStarted","Data":"baca093894dffa7ea74a15dac68cccac6ef6be4e836eb31c383377e3adbdb9ae"} Jan 21 13:34:14 crc kubenswrapper[4959]: I0121 13:34:14.490340 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d94ce670-7f1f-426a-a78f-5b62cf5919cf","Type":"ContainerStarted","Data":"04f54ea73b84954c72260f8819d6b402ae6ccd6e802dc6af7581d52fd8150063"} Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.324588 4959 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-578b8d767c-h2scb"] Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.325963 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.327396 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.347860 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-h2scb"] Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.480794 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-config\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.480843 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8djnz\" (UniqueName: \"kubernetes.io/projected/33524250-7ea8-4691-93ce-46083025be47-kube-api-access-8djnz\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.480869 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-dns-svc\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.480886 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.481133 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.481290 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.499504 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"98e47fb2-a96f-4e35-8d32-1226689833b0","Type":"ContainerStarted","Data":"34cfb1e33ea34d6895bce7d17958aed9e429fa1f3465c34aa615c635f2b989c2"} Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.582965 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.583216 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-config\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.583241 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8djnz\" (UniqueName: \"kubernetes.io/projected/33524250-7ea8-4691-93ce-46083025be47-kube-api-access-8djnz\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.583263 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-dns-svc\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.583292 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.583395 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.584012 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.584896 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.585498 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.585584 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-config\") pod 
\"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.585725 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-dns-svc\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.605484 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8djnz\" (UniqueName: \"kubernetes.io/projected/33524250-7ea8-4691-93ce-46083025be47-kube-api-access-8djnz\") pod \"dnsmasq-dns-578b8d767c-h2scb\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:15 crc kubenswrapper[4959]: I0121 13:34:15.642085 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:16 crc kubenswrapper[4959]: I0121 13:34:16.178770 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-h2scb"] Jan 21 13:34:16 crc kubenswrapper[4959]: I0121 13:34:16.509250 4959 generic.go:334] "Generic (PLEG): container finished" podID="33524250-7ea8-4691-93ce-46083025be47" containerID="badc6e1edf949e332b8169ca68858f42831ecbbd79864a467c17a5d487b64663" exitCode=0 Jan 21 13:34:16 crc kubenswrapper[4959]: I0121 13:34:16.509316 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" event={"ID":"33524250-7ea8-4691-93ce-46083025be47","Type":"ContainerDied","Data":"badc6e1edf949e332b8169ca68858f42831ecbbd79864a467c17a5d487b64663"} Jan 21 13:34:16 crc kubenswrapper[4959]: I0121 13:34:16.509623 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" event={"ID":"33524250-7ea8-4691-93ce-46083025be47","Type":"ContainerStarted","Data":"623f154ba81c25501a45cdfea58ea7a6c0025d05e52c4b48cf819e965368ca5b"} Jan 21 13:34:17 crc kubenswrapper[4959]: I0121 13:34:17.518904 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" event={"ID":"33524250-7ea8-4691-93ce-46083025be47","Type":"ContainerStarted","Data":"dc47d6c69099aec2a9d2687d74bc8a03e851d5b42abbf94439e4ec18e892d7cd"} Jan 21 13:34:17 crc kubenswrapper[4959]: I0121 13:34:17.519411 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:17 crc kubenswrapper[4959]: I0121 13:34:17.554197 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" podStartSLOduration=2.5541801570000002 podStartE2EDuration="2.554180157s" podCreationTimestamp="2026-01-21 13:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:34:17.546071199 +0000 UTC m=+1518.509101742" watchObservedRunningTime="2026-01-21 13:34:17.554180157 +0000 UTC m=+1518.517210700" Jan 21 13:34:21 crc kubenswrapper[4959]: I0121 13:34:21.379874 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 
13:34:21 crc kubenswrapper[4959]: I0121 13:34:21.380326 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:34:25 crc kubenswrapper[4959]: I0121 13:34:25.659297 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:25 crc kubenswrapper[4959]: I0121 13:34:25.720697 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-jc4q6"] Jan 21 13:34:25 crc kubenswrapper[4959]: I0121 13:34:25.720922 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" podUID="5688a156-f093-490c-832c-59254c10ba03" containerName="dnsmasq-dns" containerID="cri-o://ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d" gracePeriod=10 Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.016778 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-l8dd8"] Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.020376 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.034430 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-l8dd8"] Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.106063 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g299r\" (UniqueName: \"kubernetes.io/projected/4132df39-cbe0-451d-9eae-39ea25e6ce18-kube-api-access-g299r\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.106374 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.106396 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.106433 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.106468 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: 
\"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.106497 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-config\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.208281 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g299r\" (UniqueName: \"kubernetes.io/projected/4132df39-cbe0-451d-9eae-39ea25e6ce18-kube-api-access-g299r\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.208412 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.208436 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.209884 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.212436 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.212564 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.213215 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.213286 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " 
pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.213341 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-config\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.214015 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.216452 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-config\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.234887 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g299r\" (UniqueName: \"kubernetes.io/projected/4132df39-cbe0-451d-9eae-39ea25e6ce18-kube-api-access-g299r\") pod \"dnsmasq-dns-fbc59fbb7-l8dd8\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.333890 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.338819 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.415890 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-dns-svc\") pod \"5688a156-f093-490c-832c-59254c10ba03\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.415955 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-sb\") pod \"5688a156-f093-490c-832c-59254c10ba03\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.416166 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-nb\") pod \"5688a156-f093-490c-832c-59254c10ba03\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.416204 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-config\") pod \"5688a156-f093-490c-832c-59254c10ba03\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.416246 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75jhh\" (UniqueName: \"kubernetes.io/projected/5688a156-f093-490c-832c-59254c10ba03-kube-api-access-75jhh\") pod \"5688a156-f093-490c-832c-59254c10ba03\" (UID: \"5688a156-f093-490c-832c-59254c10ba03\") " Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.436026 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5688a156-f093-490c-832c-59254c10ba03-kube-api-access-75jhh" (OuterVolumeSpecName: "kube-api-access-75jhh") pod "5688a156-f093-490c-832c-59254c10ba03" (UID: "5688a156-f093-490c-832c-59254c10ba03"). InnerVolumeSpecName "kube-api-access-75jhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.464648 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5688a156-f093-490c-832c-59254c10ba03" (UID: "5688a156-f093-490c-832c-59254c10ba03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.467222 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5688a156-f093-490c-832c-59254c10ba03" (UID: "5688a156-f093-490c-832c-59254c10ba03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.475002 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5688a156-f093-490c-832c-59254c10ba03" (UID: "5688a156-f093-490c-832c-59254c10ba03"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.491408 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-config" (OuterVolumeSpecName: "config") pod "5688a156-f093-490c-832c-59254c10ba03" (UID: "5688a156-f093-490c-832c-59254c10ba03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.518427 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.518459 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.518469 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75jhh\" (UniqueName: \"kubernetes.io/projected/5688a156-f093-490c-832c-59254c10ba03-kube-api-access-75jhh\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.518479 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.518487 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5688a156-f093-490c-832c-59254c10ba03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.600680 4959 generic.go:334] "Generic (PLEG): container finished" podID="5688a156-f093-490c-832c-59254c10ba03" containerID="ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d" exitCode=0 Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.600729 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" event={"ID":"5688a156-f093-490c-832c-59254c10ba03","Type":"ContainerDied","Data":"ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d"} Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.600758 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" event={"ID":"5688a156-f093-490c-832c-59254c10ba03","Type":"ContainerDied","Data":"ac02e7d1ef085b45273439a9181766bfcb960a20d1767976062431fc5c8577fd"} Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.600776 4959 scope.go:117] "RemoveContainer" containerID="ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.600934 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-jc4q6" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.638568 4959 scope.go:117] "RemoveContainer" containerID="215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.646580 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-jc4q6"] Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.656630 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-jc4q6"] Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.657872 4959 scope.go:117] "RemoveContainer" containerID="ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d" Jan 21 13:34:26 crc kubenswrapper[4959]: E0121 13:34:26.658234 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d\": container with ID starting with ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d not found: ID does not exist" containerID="ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.658267 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d"} err="failed to get container status \"ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d\": rpc error: code = NotFound desc = could not find container \"ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d\": container with ID starting with ada8255c975c2341bd120a9628f7eed642aa17626348c132aca7998170df0a2d not found: ID does not exist" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.658291 4959 scope.go:117] "RemoveContainer" containerID="215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb" Jan 21 13:34:26 crc kubenswrapper[4959]: E0121 13:34:26.658577 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb\": container with ID starting with 215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb not found: ID does not exist" containerID="215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.658628 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb"} err="failed to get container status \"215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb\": rpc error: code = NotFound desc = could not find container \"215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb\": container with ID starting with 215f860dcb9126b3a332c1eb5875c79dad58b0ae7be5636930ffc5e4fc7489fb not found: ID does not exist" Jan 21 13:34:26 crc kubenswrapper[4959]: I0121 13:34:26.810098 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-l8dd8"] Jan 21 13:34:27 crc kubenswrapper[4959]: I0121 13:34:27.296768 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5688a156-f093-490c-832c-59254c10ba03" path="/var/lib/kubelet/pods/5688a156-f093-490c-832c-59254c10ba03/volumes" Jan 21 13:34:27 crc kubenswrapper[4959]: I0121 13:34:27.613309 4959 generic.go:334] 
"Generic (PLEG): container finished" podID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerID="321c6ce152359da16452cee6aa64e490fea9fa9526f2bbd8d2a795e543e33244" exitCode=0 Jan 21 13:34:27 crc kubenswrapper[4959]: I0121 13:34:27.613395 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" event={"ID":"4132df39-cbe0-451d-9eae-39ea25e6ce18","Type":"ContainerDied","Data":"321c6ce152359da16452cee6aa64e490fea9fa9526f2bbd8d2a795e543e33244"} Jan 21 13:34:27 crc kubenswrapper[4959]: I0121 13:34:27.613600 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" event={"ID":"4132df39-cbe0-451d-9eae-39ea25e6ce18","Type":"ContainerStarted","Data":"50b50d08adb282de5fbd841a9ef90ba2fab8b5cfa5b8928b7ecbdfdbfbc399dd"} Jan 21 13:34:28 crc kubenswrapper[4959]: I0121 13:34:28.624020 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" event={"ID":"4132df39-cbe0-451d-9eae-39ea25e6ce18","Type":"ContainerStarted","Data":"3d2a628cc215838558b7e209f43ee7c92f7379750aa178f0a7a44d784ffcf9d5"} Jan 21 13:34:28 crc kubenswrapper[4959]: I0121 13:34:28.625752 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:28 crc kubenswrapper[4959]: I0121 13:34:28.643270 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" podStartSLOduration=3.643252516 podStartE2EDuration="3.643252516s" podCreationTimestamp="2026-01-21 13:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:34:28.639658045 +0000 UTC m=+1529.602688618" watchObservedRunningTime="2026-01-21 13:34:28.643252516 +0000 UTC m=+1529.606283049" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.050429 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-msn27"] Jan 21 13:34:36 crc kubenswrapper[4959]: E0121 13:34:36.051450 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5688a156-f093-490c-832c-59254c10ba03" containerName="dnsmasq-dns" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.051471 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="5688a156-f093-490c-832c-59254c10ba03" containerName="dnsmasq-dns" Jan 21 13:34:36 crc kubenswrapper[4959]: E0121 13:34:36.051507 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5688a156-f093-490c-832c-59254c10ba03" containerName="init" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.051517 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="5688a156-f093-490c-832c-59254c10ba03" containerName="init" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.051722 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="5688a156-f093-490c-832c-59254c10ba03" containerName="dnsmasq-dns" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.053466 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.063204 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-msn27"] Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.137758 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-catalog-content\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.137849 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtl2\" (UniqueName: \"kubernetes.io/projected/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-kube-api-access-thtl2\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.138483 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-utilities\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.240695 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-utilities\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.240760 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-catalog-content\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.240862 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thtl2\" (UniqueName: \"kubernetes.io/projected/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-kube-api-access-thtl2\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.241292 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-utilities\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.241498 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-catalog-content\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.261220 4959 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-thtl2\" (UniqueName: \"kubernetes.io/projected/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-kube-api-access-thtl2\") pod \"redhat-marketplace-msn27\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.341213 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.377269 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.425252 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-h2scb"] Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.427460 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" podUID="33524250-7ea8-4691-93ce-46083025be47" containerName="dnsmasq-dns" containerID="cri-o://dc47d6c69099aec2a9d2687d74bc8a03e851d5b42abbf94439e4ec18e892d7cd" gracePeriod=10 Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.695053 4959 generic.go:334] "Generic (PLEG): container finished" podID="33524250-7ea8-4691-93ce-46083025be47" containerID="dc47d6c69099aec2a9d2687d74bc8a03e851d5b42abbf94439e4ec18e892d7cd" exitCode=0 Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.695376 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" event={"ID":"33524250-7ea8-4691-93ce-46083025be47","Type":"ContainerDied","Data":"dc47d6c69099aec2a9d2687d74bc8a03e851d5b42abbf94439e4ec18e892d7cd"} Jan 21 13:34:36 crc kubenswrapper[4959]: I0121 13:34:36.957219 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-msn27"] Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.039137 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.177590 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8djnz\" (UniqueName: \"kubernetes.io/projected/33524250-7ea8-4691-93ce-46083025be47-kube-api-access-8djnz\") pod \"33524250-7ea8-4691-93ce-46083025be47\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.177702 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-config\") pod \"33524250-7ea8-4691-93ce-46083025be47\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.177746 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-dns-svc\") pod \"33524250-7ea8-4691-93ce-46083025be47\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.177918 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-nb\") pod \"33524250-7ea8-4691-93ce-46083025be47\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.177957 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-openstack-edpm-ipam\") pod \"33524250-7ea8-4691-93ce-46083025be47\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.177972 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-sb\") pod \"33524250-7ea8-4691-93ce-46083025be47\" (UID: \"33524250-7ea8-4691-93ce-46083025be47\") " Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.183931 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33524250-7ea8-4691-93ce-46083025be47-kube-api-access-8djnz" (OuterVolumeSpecName: "kube-api-access-8djnz") pod "33524250-7ea8-4691-93ce-46083025be47" (UID: "33524250-7ea8-4691-93ce-46083025be47"). InnerVolumeSpecName "kube-api-access-8djnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.228474 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-config" (OuterVolumeSpecName: "config") pod "33524250-7ea8-4691-93ce-46083025be47" (UID: "33524250-7ea8-4691-93ce-46083025be47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.228877 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "33524250-7ea8-4691-93ce-46083025be47" (UID: "33524250-7ea8-4691-93ce-46083025be47"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.230024 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33524250-7ea8-4691-93ce-46083025be47" (UID: "33524250-7ea8-4691-93ce-46083025be47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.230363 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33524250-7ea8-4691-93ce-46083025be47" (UID: "33524250-7ea8-4691-93ce-46083025be47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.236243 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33524250-7ea8-4691-93ce-46083025be47" (UID: "33524250-7ea8-4691-93ce-46083025be47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.280970 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.281032 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.281049 4959 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.281071 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8djnz\" (UniqueName: \"kubernetes.io/projected/33524250-7ea8-4691-93ce-46083025be47-kube-api-access-8djnz\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.281115 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.281135 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33524250-7ea8-4691-93ce-46083025be47-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.705607 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" event={"ID":"33524250-7ea8-4691-93ce-46083025be47","Type":"ContainerDied","Data":"623f154ba81c25501a45cdfea58ea7a6c0025d05e52c4b48cf819e965368ca5b"} Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.705891 4959 scope.go:117] "RemoveContainer" containerID="dc47d6c69099aec2a9d2687d74bc8a03e851d5b42abbf94439e4ec18e892d7cd" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.706016 4959 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-h2scb" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.709585 4959 generic.go:334] "Generic (PLEG): container finished" podID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerID="aaf2c20d5f88e6c0176741dbdbdbc189aef21b9770aa1f568956ea31c23f8c13" exitCode=0 Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.709619 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-msn27" event={"ID":"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc","Type":"ContainerDied","Data":"aaf2c20d5f88e6c0176741dbdbdbc189aef21b9770aa1f568956ea31c23f8c13"} Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.709644 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-msn27" event={"ID":"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc","Type":"ContainerStarted","Data":"b2f2c180524713b7dd99e1b1465ea3b84f2c3d18be07b4ed15c9a634b3c325a4"} Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.735203 4959 scope.go:117] "RemoveContainer" containerID="badc6e1edf949e332b8169ca68858f42831ecbbd79864a467c17a5d487b64663" Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.774344 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-h2scb"] Jan 21 13:34:37 crc kubenswrapper[4959]: I0121 13:34:37.781256 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-h2scb"] Jan 21 13:34:38 crc kubenswrapper[4959]: I0121 13:34:38.721739 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-msn27" event={"ID":"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc","Type":"ContainerStarted","Data":"5b9785d89d6e9a55db27e4ad7e6338f1cec9a56a48e1253ab549abd792c4f303"} Jan 21 13:34:39 crc kubenswrapper[4959]: I0121 13:34:39.294367 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33524250-7ea8-4691-93ce-46083025be47" path="/var/lib/kubelet/pods/33524250-7ea8-4691-93ce-46083025be47/volumes" Jan 21 13:34:39 crc kubenswrapper[4959]: I0121 13:34:39.731638 4959 generic.go:334] "Generic (PLEG): container finished" podID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerID="5b9785d89d6e9a55db27e4ad7e6338f1cec9a56a48e1253ab549abd792c4f303" exitCode=0 Jan 21 13:34:39 crc kubenswrapper[4959]: I0121 13:34:39.731717 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-msn27" event={"ID":"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc","Type":"ContainerDied","Data":"5b9785d89d6e9a55db27e4ad7e6338f1cec9a56a48e1253ab549abd792c4f303"} Jan 21 13:34:40 crc kubenswrapper[4959]: I0121 13:34:40.743154 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-msn27" event={"ID":"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc","Type":"ContainerStarted","Data":"f642c2efc8d06b6adc3ed81dbe30f3aeb2f2cf8ce3c41085335416c3d4190068"} Jan 21 13:34:40 crc kubenswrapper[4959]: I0121 13:34:40.769854 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-msn27" podStartSLOduration=2.129253657 podStartE2EDuration="4.76983055s" podCreationTimestamp="2026-01-21 13:34:36 +0000 UTC" firstStartedPulling="2026-01-21 13:34:37.711488463 +0000 UTC m=+1538.674519006" lastFinishedPulling="2026-01-21 13:34:40.352065336 +0000 UTC m=+1541.315095899" observedRunningTime="2026-01-21 13:34:40.763861642 +0000 UTC m=+1541.726892185" watchObservedRunningTime="2026-01-21 
13:34:40.76983055 +0000 UTC m=+1541.732861093" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.382504 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.383063 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.444817 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.470526 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts"] Jan 21 13:34:46 crc kubenswrapper[4959]: E0121 13:34:46.471116 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33524250-7ea8-4691-93ce-46083025be47" containerName="init" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.471137 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="33524250-7ea8-4691-93ce-46083025be47" containerName="init" Jan 21 13:34:46 crc kubenswrapper[4959]: E0121 13:34:46.471193 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33524250-7ea8-4691-93ce-46083025be47" containerName="dnsmasq-dns" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.471203 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="33524250-7ea8-4691-93ce-46083025be47" containerName="dnsmasq-dns" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.471397 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="33524250-7ea8-4691-93ce-46083025be47" containerName="dnsmasq-dns" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.472617 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.475237 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.475838 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.476959 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.483029 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.484427 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts"] Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.633667 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.633766 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.633804 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfpl\" (UniqueName: \"kubernetes.io/projected/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-kube-api-access-mtfpl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.633922 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.735532 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.735817 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.735856 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfpl\" (UniqueName: \"kubernetes.io/projected/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-kube-api-access-mtfpl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.736329 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.741712 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.742343 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.742809 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.755004 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfpl\" (UniqueName: \"kubernetes.io/projected/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-kube-api-access-mtfpl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.795669 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.876628 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:46 crc kubenswrapper[4959]: I0121 13:34:46.934033 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-msn27"] Jan 21 13:34:47 crc kubenswrapper[4959]: I0121 13:34:47.340919 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts"] Jan 21 13:34:47 crc kubenswrapper[4959]: I0121 13:34:47.819016 4959 generic.go:334] "Generic (PLEG): container finished" podID="98e47fb2-a96f-4e35-8d32-1226689833b0" containerID="34cfb1e33ea34d6895bce7d17958aed9e429fa1f3465c34aa615c635f2b989c2" exitCode=0 Jan 21 13:34:47 crc kubenswrapper[4959]: I0121 13:34:47.819190 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"98e47fb2-a96f-4e35-8d32-1226689833b0","Type":"ContainerDied","Data":"34cfb1e33ea34d6895bce7d17958aed9e429fa1f3465c34aa615c635f2b989c2"} Jan 21 13:34:47 crc kubenswrapper[4959]: I0121 13:34:47.821634 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" event={"ID":"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1","Type":"ContainerStarted","Data":"1fb52ba00877d1b34d0c5aaa464ff6f3cf44d8446cd486035648af9996e4642c"} Jan 21 13:34:48 crc kubenswrapper[4959]: I0121 13:34:48.832175 4959 generic.go:334] "Generic (PLEG): container finished" podID="d94ce670-7f1f-426a-a78f-5b62cf5919cf" containerID="04f54ea73b84954c72260f8819d6b402ae6ccd6e802dc6af7581d52fd8150063" exitCode=0 Jan 21 13:34:48 crc kubenswrapper[4959]: I0121 13:34:48.832220 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d94ce670-7f1f-426a-a78f-5b62cf5919cf","Type":"ContainerDied","Data":"04f54ea73b84954c72260f8819d6b402ae6ccd6e802dc6af7581d52fd8150063"} Jan 21 13:34:48 crc kubenswrapper[4959]: I0121 13:34:48.835576 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-msn27" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerName="registry-server" containerID="cri-o://f642c2efc8d06b6adc3ed81dbe30f3aeb2f2cf8ce3c41085335416c3d4190068" gracePeriod=2 Jan 21 13:34:48 crc kubenswrapper[4959]: I0121 13:34:48.835679 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"98e47fb2-a96f-4e35-8d32-1226689833b0","Type":"ContainerStarted","Data":"fb238559a387904f13e7340d5e6d8e6bc568ae811196d6b87411702b74382876"} Jan 21 13:34:48 crc kubenswrapper[4959]: I0121 13:34:48.836489 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:34:48 crc kubenswrapper[4959]: I0121 13:34:48.908361 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.908338216 podStartE2EDuration="36.908338216s" podCreationTimestamp="2026-01-21 13:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:34:48.898084297 +0000 UTC m=+1549.861114870" watchObservedRunningTime="2026-01-21 13:34:48.908338216 +0000 UTC m=+1549.871368769" Jan 21 13:34:49 crc 
kubenswrapper[4959]: I0121 13:34:49.855442 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d94ce670-7f1f-426a-a78f-5b62cf5919cf","Type":"ContainerStarted","Data":"075e62c58b6dec3ac84ca02002bf1201cfe246dce5cc4df8c6e173c52545cc92"} Jan 21 13:34:49 crc kubenswrapper[4959]: I0121 13:34:49.857179 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 13:34:49 crc kubenswrapper[4959]: I0121 13:34:49.862955 4959 generic.go:334] "Generic (PLEG): container finished" podID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerID="f642c2efc8d06b6adc3ed81dbe30f3aeb2f2cf8ce3c41085335416c3d4190068" exitCode=0 Jan 21 13:34:49 crc kubenswrapper[4959]: I0121 13:34:49.863051 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-msn27" event={"ID":"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc","Type":"ContainerDied","Data":"f642c2efc8d06b6adc3ed81dbe30f3aeb2f2cf8ce3c41085335416c3d4190068"} Jan 21 13:34:49 crc kubenswrapper[4959]: I0121 13:34:49.863113 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-msn27" event={"ID":"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc","Type":"ContainerDied","Data":"b2f2c180524713b7dd99e1b1465ea3b84f2c3d18be07b4ed15c9a634b3c325a4"} Jan 21 13:34:49 crc kubenswrapper[4959]: I0121 13:34:49.863126 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2f2c180524713b7dd99e1b1465ea3b84f2c3d18be07b4ed15c9a634b3c325a4" Jan 21 13:34:49 crc kubenswrapper[4959]: I0121 13:34:49.885300 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.885282715 podStartE2EDuration="38.885282715s" podCreationTimestamp="2026-01-21 13:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:34:49.883723851 +0000 UTC m=+1550.846754384" watchObservedRunningTime="2026-01-21 13:34:49.885282715 +0000 UTC m=+1550.848313258" Jan 21 13:34:49 crc kubenswrapper[4959]: I0121 13:34:49.921371 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.001386 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thtl2\" (UniqueName: \"kubernetes.io/projected/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-kube-api-access-thtl2\") pod \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.001723 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-utilities\") pod \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.001869 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-catalog-content\") pod \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\" (UID: \"3bfdd439-0dc1-4c52-9f62-205cc2c51cbc\") " Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.002660 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-utilities" (OuterVolumeSpecName: "utilities") pod "3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" (UID: "3bfdd439-0dc1-4c52-9f62-205cc2c51cbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.003030 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.007260 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-kube-api-access-thtl2" (OuterVolumeSpecName: "kube-api-access-thtl2") pod "3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" (UID: "3bfdd439-0dc1-4c52-9f62-205cc2c51cbc"). InnerVolumeSpecName "kube-api-access-thtl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.039328 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" (UID: "3bfdd439-0dc1-4c52-9f62-205cc2c51cbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.105337 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thtl2\" (UniqueName: \"kubernetes.io/projected/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-kube-api-access-thtl2\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.105375 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.876666 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-msn27" Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.911873 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-msn27"] Jan 21 13:34:50 crc kubenswrapper[4959]: I0121 13:34:50.923791 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-msn27"] Jan 21 13:34:51 crc kubenswrapper[4959]: I0121 13:34:51.298821 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" path="/var/lib/kubelet/pods/3bfdd439-0dc1-4c52-9f62-205cc2c51cbc/volumes" Jan 21 13:34:51 crc kubenswrapper[4959]: I0121 13:34:51.379551 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:34:51 crc kubenswrapper[4959]: I0121 13:34:51.379605 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:34:51 crc kubenswrapper[4959]: I0121 13:34:51.379647 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:34:51 crc kubenswrapper[4959]: I0121 13:34:51.380359 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:34:51 crc kubenswrapper[4959]: I0121 13:34:51.380416 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" gracePeriod=600 Jan 21 13:34:52 crc kubenswrapper[4959]: E0121 13:34:52.005568 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:34:52 crc kubenswrapper[4959]: I0121 13:34:52.904027 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" exitCode=0 Jan 21 13:34:52 crc kubenswrapper[4959]: I0121 13:34:52.904070 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a"} Jan 21 13:34:52 
crc kubenswrapper[4959]: I0121 13:34:52.904113 4959 scope.go:117] "RemoveContainer" containerID="8a84446f54fbcdd9e945dd5ad114c0f8a1adc39825215bb3644cea7a3988b06e" Jan 21 13:34:52 crc kubenswrapper[4959]: I0121 13:34:52.904706 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:34:52 crc kubenswrapper[4959]: E0121 13:34:52.905165 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.660137 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x6hhl"] Jan 21 13:34:55 crc kubenswrapper[4959]: E0121 13:34:55.660872 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerName="registry-server" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.660887 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerName="registry-server" Jan 21 13:34:55 crc kubenswrapper[4959]: E0121 13:34:55.660923 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerName="extract-utilities" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.660929 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerName="extract-utilities" Jan 21 13:34:55 crc kubenswrapper[4959]: E0121 13:34:55.660942 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerName="extract-content" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.660948 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerName="extract-content" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.661141 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfdd439-0dc1-4c52-9f62-205cc2c51cbc" containerName="registry-server" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.663784 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.683864 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6hhl"] Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.813626 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-catalog-content\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.813925 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdcv\" (UniqueName: \"kubernetes.io/projected/ae3da170-1e40-4440-9614-c6a1f23c1901-kube-api-access-ngdcv\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.814166 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-utilities\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.916071 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-catalog-content\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.916168 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdcv\" (UniqueName: \"kubernetes.io/projected/ae3da170-1e40-4440-9614-c6a1f23c1901-kube-api-access-ngdcv\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.916227 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-utilities\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.916629 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-catalog-content\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.916656 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-utilities\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:55 crc kubenswrapper[4959]: I0121 13:34:55.944144 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ngdcv\" (UniqueName: \"kubernetes.io/projected/ae3da170-1e40-4440-9614-c6a1f23c1901-kube-api-access-ngdcv\") pod \"community-operators-x6hhl\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:56 crc kubenswrapper[4959]: I0121 13:34:56.002620 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:34:59 crc kubenswrapper[4959]: I0121 13:34:59.728634 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:35:00 crc kubenswrapper[4959]: I0121 13:35:00.181040 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6hhl"] Jan 21 13:35:00 crc kubenswrapper[4959]: W0121 13:35:00.192035 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae3da170_1e40_4440_9614_c6a1f23c1901.slice/crio-45100761513fe1d1f513b2651bb300cf33f25ecd0d221a6bbbd841b03f952b30 WatchSource:0}: Error finding container 45100761513fe1d1f513b2651bb300cf33f25ecd0d221a6bbbd841b03f952b30: Status 404 returned error can't find the container with id 45100761513fe1d1f513b2651bb300cf33f25ecd0d221a6bbbd841b03f952b30 Jan 21 13:35:00 crc kubenswrapper[4959]: I0121 13:35:00.992588 4959 generic.go:334] "Generic (PLEG): container finished" podID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerID="cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0" exitCode=0 Jan 21 13:35:00 crc kubenswrapper[4959]: I0121 13:35:00.992686 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6hhl" event={"ID":"ae3da170-1e40-4440-9614-c6a1f23c1901","Type":"ContainerDied","Data":"cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0"} Jan 21 13:35:00 crc kubenswrapper[4959]: I0121 13:35:00.992934 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6hhl" event={"ID":"ae3da170-1e40-4440-9614-c6a1f23c1901","Type":"ContainerStarted","Data":"45100761513fe1d1f513b2651bb300cf33f25ecd0d221a6bbbd841b03f952b30"} Jan 21 13:35:00 crc kubenswrapper[4959]: I0121 13:35:00.994908 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" event={"ID":"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1","Type":"ContainerStarted","Data":"4dfb5ce5974fee8b569de5047c4e7d7f224e13856ad62f29f744840e00b5893c"} Jan 21 13:35:01 crc kubenswrapper[4959]: I0121 13:35:01.042971 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" podStartSLOduration=2.7115202050000002 podStartE2EDuration="15.042952226s" podCreationTimestamp="2026-01-21 13:34:46 +0000 UTC" firstStartedPulling="2026-01-21 13:34:47.38705112 +0000 UTC m=+1548.350081663" lastFinishedPulling="2026-01-21 13:34:59.718483141 +0000 UTC m=+1560.681513684" observedRunningTime="2026-01-21 13:35:01.039220711 +0000 UTC m=+1562.002251264" watchObservedRunningTime="2026-01-21 13:35:01.042952226 +0000 UTC m=+1562.005982769" Jan 21 13:35:01 crc kubenswrapper[4959]: I0121 13:35:01.962339 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 13:35:02 crc kubenswrapper[4959]: I0121 13:35:02.886992 4959 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:35:03 crc kubenswrapper[4959]: I0121 13:35:03.013981 4959 generic.go:334] "Generic (PLEG): container finished" podID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerID="95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4" exitCode=0 Jan 21 13:35:03 crc kubenswrapper[4959]: I0121 13:35:03.014191 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6hhl" event={"ID":"ae3da170-1e40-4440-9614-c6a1f23c1901","Type":"ContainerDied","Data":"95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4"} Jan 21 13:35:04 crc kubenswrapper[4959]: I0121 13:35:04.026591 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6hhl" event={"ID":"ae3da170-1e40-4440-9614-c6a1f23c1901","Type":"ContainerStarted","Data":"c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524"} Jan 21 13:35:04 crc kubenswrapper[4959]: I0121 13:35:04.047302 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x6hhl" podStartSLOduration=6.607879533 podStartE2EDuration="9.047282542s" podCreationTimestamp="2026-01-21 13:34:55 +0000 UTC" firstStartedPulling="2026-01-21 13:35:00.995333695 +0000 UTC m=+1561.958364228" lastFinishedPulling="2026-01-21 13:35:03.434736694 +0000 UTC m=+1564.397767237" observedRunningTime="2026-01-21 13:35:04.045197784 +0000 UTC m=+1565.008228327" watchObservedRunningTime="2026-01-21 13:35:04.047282542 +0000 UTC m=+1565.010313085" Jan 21 13:35:06 crc kubenswrapper[4959]: I0121 13:35:06.003467 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:35:06 crc kubenswrapper[4959]: I0121 13:35:06.003519 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:35:06 crc kubenswrapper[4959]: I0121 13:35:06.050815 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:35:07 crc kubenswrapper[4959]: I0121 13:35:07.285870 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:35:07 crc kubenswrapper[4959]: E0121 13:35:07.286559 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:35:11 crc kubenswrapper[4959]: I0121 13:35:11.082653 4959 generic.go:334] "Generic (PLEG): container finished" podID="dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" containerID="4dfb5ce5974fee8b569de5047c4e7d7f224e13856ad62f29f744840e00b5893c" exitCode=0 Jan 21 13:35:11 crc kubenswrapper[4959]: I0121 13:35:11.082717 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" event={"ID":"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1","Type":"ContainerDied","Data":"4dfb5ce5974fee8b569de5047c4e7d7f224e13856ad62f29f744840e00b5893c"} Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.512232 4959 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.644236 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtfpl\" (UniqueName: \"kubernetes.io/projected/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-kube-api-access-mtfpl\") pod \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.644312 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-inventory\") pod \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.644385 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-repo-setup-combined-ca-bundle\") pod \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.644464 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-ssh-key-openstack-edpm-ipam\") pod \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\" (UID: \"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1\") " Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.649858 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" (UID: "dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.651538 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-kube-api-access-mtfpl" (OuterVolumeSpecName: "kube-api-access-mtfpl") pod "dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" (UID: "dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1"). InnerVolumeSpecName "kube-api-access-mtfpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.671633 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" (UID: "dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.671764 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-inventory" (OuterVolumeSpecName: "inventory") pod "dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" (UID: "dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.746613 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtfpl\" (UniqueName: \"kubernetes.io/projected/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-kube-api-access-mtfpl\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.746638 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.746647 4959 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:12 crc kubenswrapper[4959]: I0121 13:35:12.746657 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.108717 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" event={"ID":"dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1","Type":"ContainerDied","Data":"1fb52ba00877d1b34d0c5aaa464ff6f3cf44d8446cd486035648af9996e4642c"} Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.108778 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fb52ba00877d1b34d0c5aaa464ff6f3cf44d8446cd486035648af9996e4642c" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.108817 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.217205 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7"] Jan 21 13:35:13 crc kubenswrapper[4959]: E0121 13:35:13.217673 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.217696 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.217910 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.219210 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.221000 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.221066 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.221150 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.221271 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.226697 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7"] Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.356489 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.356674 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.356723 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvgx\" (UniqueName: \"kubernetes.io/projected/6c764408-7cb2-4537-b591-626ea5924406-kube-api-access-xfvgx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.356787 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.457842 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.457995 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.458196 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.458268 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfvgx\" (UniqueName: \"kubernetes.io/projected/6c764408-7cb2-4537-b591-626ea5924406-kube-api-access-xfvgx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.462196 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.462364 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.463946 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.482904 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfvgx\" (UniqueName: \"kubernetes.io/projected/6c764408-7cb2-4537-b591-626ea5924406-kube-api-access-xfvgx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-944s7\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:13 crc kubenswrapper[4959]: I0121 13:35:13.567765 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:35:14 crc kubenswrapper[4959]: I0121 13:35:14.094822 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7"] Jan 21 13:35:14 crc kubenswrapper[4959]: I0121 13:35:14.118725 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" event={"ID":"6c764408-7cb2-4537-b591-626ea5924406","Type":"ContainerStarted","Data":"b91548597cb27b82d3ab5743457151d51a81642a217daeae6627d81383277871"} Jan 21 13:35:15 crc kubenswrapper[4959]: I0121 13:35:15.127313 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" event={"ID":"6c764408-7cb2-4537-b591-626ea5924406","Type":"ContainerStarted","Data":"81c13dad251e94bac395b7b7c1210b6eff9c3ab41d922083610eb2dbd6e79cc8"} Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.051525 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.069170 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" podStartSLOduration=2.657473938 podStartE2EDuration="3.06915153s" podCreationTimestamp="2026-01-21 13:35:13 +0000 UTC" firstStartedPulling="2026-01-21 13:35:14.096503223 +0000 UTC m=+1575.059533766" lastFinishedPulling="2026-01-21 13:35:14.508180795 +0000 UTC m=+1575.471211358" observedRunningTime="2026-01-21 13:35:15.150274806 +0000 UTC m=+1576.113305349" watchObservedRunningTime="2026-01-21 13:35:16.06915153 +0000 UTC m=+1577.032182073" Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.101317 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x6hhl"] Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.135780 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x6hhl" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerName="registry-server" containerID="cri-o://c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524" gracePeriod=2 Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.585020 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.747342 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-catalog-content\") pod \"ae3da170-1e40-4440-9614-c6a1f23c1901\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.747474 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-utilities\") pod \"ae3da170-1e40-4440-9614-c6a1f23c1901\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.747542 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngdcv\" (UniqueName: \"kubernetes.io/projected/ae3da170-1e40-4440-9614-c6a1f23c1901-kube-api-access-ngdcv\") pod \"ae3da170-1e40-4440-9614-c6a1f23c1901\" (UID: \"ae3da170-1e40-4440-9614-c6a1f23c1901\") " Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.748687 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-utilities" (OuterVolumeSpecName: "utilities") pod "ae3da170-1e40-4440-9614-c6a1f23c1901" (UID: "ae3da170-1e40-4440-9614-c6a1f23c1901"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.752952 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3da170-1e40-4440-9614-c6a1f23c1901-kube-api-access-ngdcv" (OuterVolumeSpecName: "kube-api-access-ngdcv") pod "ae3da170-1e40-4440-9614-c6a1f23c1901" (UID: "ae3da170-1e40-4440-9614-c6a1f23c1901"). InnerVolumeSpecName "kube-api-access-ngdcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.795486 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae3da170-1e40-4440-9614-c6a1f23c1901" (UID: "ae3da170-1e40-4440-9614-c6a1f23c1901"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.850222 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.850259 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngdcv\" (UniqueName: \"kubernetes.io/projected/ae3da170-1e40-4440-9614-c6a1f23c1901-kube-api-access-ngdcv\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:16 crc kubenswrapper[4959]: I0121 13:35:16.850270 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3da170-1e40-4440-9614-c6a1f23c1901-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.148790 4959 generic.go:334] "Generic (PLEG): container finished" podID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerID="c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524" exitCode=0 Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.148852 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6hhl" event={"ID":"ae3da170-1e40-4440-9614-c6a1f23c1901","Type":"ContainerDied","Data":"c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524"} Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.148886 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6hhl" event={"ID":"ae3da170-1e40-4440-9614-c6a1f23c1901","Type":"ContainerDied","Data":"45100761513fe1d1f513b2651bb300cf33f25ecd0d221a6bbbd841b03f952b30"} Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.148908 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x6hhl" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.148912 4959 scope.go:117] "RemoveContainer" containerID="c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.177545 4959 scope.go:117] "RemoveContainer" containerID="95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.198008 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x6hhl"] Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.200934 4959 scope.go:117] "RemoveContainer" containerID="cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.209540 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x6hhl"] Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.242960 4959 scope.go:117] "RemoveContainer" containerID="c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524" Jan 21 13:35:17 crc kubenswrapper[4959]: E0121 13:35:17.243495 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524\": container with ID starting with c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524 not found: ID does not exist" containerID="c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.243537 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524"} err="failed to get container status \"c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524\": rpc error: code = NotFound desc = could not find container \"c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524\": container with ID starting with c35a57a0e78fcbc647aa859e33391b5ae2401203f41bbcd833fe92cfffcd3524 not found: ID does not exist" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.243562 4959 scope.go:117] "RemoveContainer" containerID="95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4" Jan 21 13:35:17 crc kubenswrapper[4959]: E0121 13:35:17.243845 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4\": container with ID starting with 95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4 not found: ID does not exist" containerID="95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.243870 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4"} err="failed to get container status \"95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4\": rpc error: code = NotFound desc = could not find container \"95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4\": container with ID starting with 95ccc9d73dc7d5fb5063fe7392bd2f51b9f84c7e18325b78ed41a6f07c4cc6e4 not found: ID does not exist" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.243885 4959 scope.go:117] "RemoveContainer" 
containerID="cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0" Jan 21 13:35:17 crc kubenswrapper[4959]: E0121 13:35:17.244143 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0\": container with ID starting with cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0 not found: ID does not exist" containerID="cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.244183 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0"} err="failed to get container status \"cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0\": rpc error: code = NotFound desc = could not find container \"cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0\": container with ID starting with cc1ed925d6bc072fd5ba65dddac845bddf3bae303a541ff997c70222ce9426e0 not found: ID does not exist" Jan 21 13:35:17 crc kubenswrapper[4959]: I0121 13:35:17.295371 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" path="/var/lib/kubelet/pods/ae3da170-1e40-4440-9614-c6a1f23c1901/volumes" Jan 21 13:35:21 crc kubenswrapper[4959]: I0121 13:35:21.285850 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:35:21 crc kubenswrapper[4959]: E0121 13:35:21.286717 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:35:33 crc kubenswrapper[4959]: I0121 13:35:33.231458 4959 scope.go:117] "RemoveContainer" containerID="6997fc58606eaf1282588ccb98c7f9e513bb3a0183edc9aae6c8bc0175080894" Jan 21 13:35:33 crc kubenswrapper[4959]: I0121 13:35:33.266843 4959 scope.go:117] "RemoveContainer" containerID="f80f480d3f2f809562c976c8560d795be5ea388ff94cb11c39a2f69db9d1a11f" Jan 21 13:35:36 crc kubenswrapper[4959]: I0121 13:35:36.286090 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:35:36 crc kubenswrapper[4959]: E0121 13:35:36.286540 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:35:47 crc kubenswrapper[4959]: I0121 13:35:47.286466 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:35:47 crc kubenswrapper[4959]: E0121 13:35:47.287182 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:36:01 crc kubenswrapper[4959]: I0121 13:36:01.285888 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:36:01 crc kubenswrapper[4959]: E0121 13:36:01.286759 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.315419 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4lbts"] Jan 21 13:36:09 crc kubenswrapper[4959]: E0121 13:36:09.316185 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerName="registry-server" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.316197 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerName="registry-server" Jan 21 13:36:09 crc kubenswrapper[4959]: E0121 13:36:09.316209 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerName="extract-content" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.316216 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerName="extract-content" Jan 21 13:36:09 crc kubenswrapper[4959]: E0121 13:36:09.316229 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerName="extract-utilities" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.316236 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerName="extract-utilities" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.316471 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3da170-1e40-4440-9614-c6a1f23c1901" containerName="registry-server" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.318406 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.337442 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4lbts"] Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.482208 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/da5f452c-2af0-4723-95e7-6504d0b616b3-kube-api-access-5mm4f\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.482546 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-utilities\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.482656 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-catalog-content\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.600219 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-utilities\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.600276 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-catalog-content\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.600580 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/da5f452c-2af0-4723-95e7-6504d0b616b3-kube-api-access-5mm4f\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.600944 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-catalog-content\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.600953 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-utilities\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.625823 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/da5f452c-2af0-4723-95e7-6504d0b616b3-kube-api-access-5mm4f\") pod \"certified-operators-4lbts\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:09 crc kubenswrapper[4959]: I0121 13:36:09.654798 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:10 crc kubenswrapper[4959]: I0121 13:36:10.164840 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4lbts"] Jan 21 13:36:10 crc kubenswrapper[4959]: I0121 13:36:10.632126 4959 generic.go:334] "Generic (PLEG): container finished" podID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerID="94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766" exitCode=0 Jan 21 13:36:10 crc kubenswrapper[4959]: I0121 13:36:10.632233 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lbts" event={"ID":"da5f452c-2af0-4723-95e7-6504d0b616b3","Type":"ContainerDied","Data":"94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766"} Jan 21 13:36:10 crc kubenswrapper[4959]: I0121 13:36:10.634904 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lbts" event={"ID":"da5f452c-2af0-4723-95e7-6504d0b616b3","Type":"ContainerStarted","Data":"31d701eba678cb119cd644678c415af36e7ae060f3105981f3533e04e8d087e4"} Jan 21 13:36:12 crc kubenswrapper[4959]: I0121 13:36:12.652780 4959 generic.go:334] "Generic (PLEG): container finished" podID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerID="d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6" exitCode=0 Jan 21 13:36:12 crc kubenswrapper[4959]: I0121 13:36:12.653035 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lbts" event={"ID":"da5f452c-2af0-4723-95e7-6504d0b616b3","Type":"ContainerDied","Data":"d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6"} Jan 21 13:36:13 crc kubenswrapper[4959]: I0121 13:36:13.661875 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lbts" event={"ID":"da5f452c-2af0-4723-95e7-6504d0b616b3","Type":"ContainerStarted","Data":"0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066"} Jan 21 13:36:13 crc kubenswrapper[4959]: I0121 13:36:13.683589 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4lbts" podStartSLOduration=2.235753362 podStartE2EDuration="4.683569301s" podCreationTimestamp="2026-01-21 13:36:09 +0000 UTC" firstStartedPulling="2026-01-21 13:36:10.642301563 +0000 UTC m=+1631.605332116" lastFinishedPulling="2026-01-21 13:36:13.090117512 +0000 UTC m=+1634.053148055" observedRunningTime="2026-01-21 13:36:13.675838128 +0000 UTC m=+1634.638868691" watchObservedRunningTime="2026-01-21 13:36:13.683569301 +0000 UTC m=+1634.646599854" Jan 21 13:36:14 crc kubenswrapper[4959]: I0121 13:36:14.285846 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:36:14 crc kubenswrapper[4959]: E0121 13:36:14.286160 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:36:19 crc kubenswrapper[4959]: I0121 13:36:19.654954 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:19 crc kubenswrapper[4959]: I0121 13:36:19.656616 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:19 crc kubenswrapper[4959]: I0121 13:36:19.700081 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:19 crc kubenswrapper[4959]: I0121 13:36:19.758154 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:19 crc kubenswrapper[4959]: I0121 13:36:19.938790 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4lbts"] Jan 21 13:36:21 crc kubenswrapper[4959]: I0121 13:36:21.732843 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4lbts" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerName="registry-server" containerID="cri-o://0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066" gracePeriod=2 Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.260718 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.358963 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/da5f452c-2af0-4723-95e7-6504d0b616b3-kube-api-access-5mm4f\") pod \"da5f452c-2af0-4723-95e7-6504d0b616b3\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.359026 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-utilities\") pod \"da5f452c-2af0-4723-95e7-6504d0b616b3\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.359254 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-catalog-content\") pod \"da5f452c-2af0-4723-95e7-6504d0b616b3\" (UID: \"da5f452c-2af0-4723-95e7-6504d0b616b3\") " Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.360223 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-utilities" (OuterVolumeSpecName: "utilities") pod "da5f452c-2af0-4723-95e7-6504d0b616b3" (UID: "da5f452c-2af0-4723-95e7-6504d0b616b3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.364760 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5f452c-2af0-4723-95e7-6504d0b616b3-kube-api-access-5mm4f" (OuterVolumeSpecName: "kube-api-access-5mm4f") pod "da5f452c-2af0-4723-95e7-6504d0b616b3" (UID: "da5f452c-2af0-4723-95e7-6504d0b616b3"). InnerVolumeSpecName "kube-api-access-5mm4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.462674 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mm4f\" (UniqueName: \"kubernetes.io/projected/da5f452c-2af0-4723-95e7-6504d0b616b3-kube-api-access-5mm4f\") on node \"crc\" DevicePath \"\"" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.462734 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.732670 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da5f452c-2af0-4723-95e7-6504d0b616b3" (UID: "da5f452c-2af0-4723-95e7-6504d0b616b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.751992 4959 generic.go:334] "Generic (PLEG): container finished" podID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerID="0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066" exitCode=0 Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.752057 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4lbts" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.752198 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lbts" event={"ID":"da5f452c-2af0-4723-95e7-6504d0b616b3","Type":"ContainerDied","Data":"0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066"} Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.752268 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lbts" event={"ID":"da5f452c-2af0-4723-95e7-6504d0b616b3","Type":"ContainerDied","Data":"31d701eba678cb119cd644678c415af36e7ae060f3105981f3533e04e8d087e4"} Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.752294 4959 scope.go:117] "RemoveContainer" containerID="0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.778588 4959 scope.go:117] "RemoveContainer" containerID="d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.789748 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da5f452c-2af0-4723-95e7-6504d0b616b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.794000 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4lbts"] Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.803848 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4lbts"] Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.809884 4959 scope.go:117] "RemoveContainer" containerID="94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.838052 4959 scope.go:117] "RemoveContainer" containerID="0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066" Jan 21 13:36:22 crc kubenswrapper[4959]: E0121 13:36:22.838452 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066\": container with ID starting with 0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066 not found: ID does not exist" containerID="0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.838480 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066"} err="failed to get container status \"0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066\": rpc error: code = NotFound desc = could not find container \"0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066\": container with ID starting with 0b91d94aeb5450687b408087b8c5037f7cdefe100e87dae55f3b63d149438066 not found: ID does not exist" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.838499 4959 scope.go:117] "RemoveContainer" containerID="d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6" Jan 21 13:36:22 crc kubenswrapper[4959]: E0121 13:36:22.838847 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6\": 
container with ID starting with d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6 not found: ID does not exist" containerID="d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.838869 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6"} err="failed to get container status \"d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6\": rpc error: code = NotFound desc = could not find container \"d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6\": container with ID starting with d737942122a11fbbf8f9ee3cdcb1f92cf97c66ef53a990caa2eeaf7fe08fd2c6 not found: ID does not exist" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.838885 4959 scope.go:117] "RemoveContainer" containerID="94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766" Jan 21 13:36:22 crc kubenswrapper[4959]: E0121 13:36:22.839283 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766\": container with ID starting with 94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766 not found: ID does not exist" containerID="94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766" Jan 21 13:36:22 crc kubenswrapper[4959]: I0121 13:36:22.839440 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766"} err="failed to get container status \"94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766\": rpc error: code = NotFound desc = could not find container \"94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766\": container with ID starting with 94a9f0b88fb64bc4ed9c96af2531a96314dbef46803dd65bd96083504333d766 not found: ID does not exist" Jan 21 13:36:23 crc kubenswrapper[4959]: I0121 13:36:23.296357 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" path="/var/lib/kubelet/pods/da5f452c-2af0-4723-95e7-6504d0b616b3/volumes" Jan 21 13:36:28 crc kubenswrapper[4959]: I0121 13:36:28.286344 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:36:28 crc kubenswrapper[4959]: E0121 13:36:28.287223 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:36:33 crc kubenswrapper[4959]: I0121 13:36:33.382664 4959 scope.go:117] "RemoveContainer" containerID="087eb15ea62d6c73abf255de8ceb2bffbebc639c9f5f646aef7e24a06d4ee1b8" Jan 21 13:36:42 crc kubenswrapper[4959]: I0121 13:36:42.286480 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:36:42 crc kubenswrapper[4959]: E0121 13:36:42.289916 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:36:57 crc kubenswrapper[4959]: I0121 13:36:57.287530 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:36:57 crc kubenswrapper[4959]: E0121 13:36:57.288373 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:37:11 crc kubenswrapper[4959]: I0121 13:37:11.286494 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:37:11 crc kubenswrapper[4959]: E0121 13:37:11.287392 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:37:22 crc kubenswrapper[4959]: I0121 13:37:22.285838 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:37:22 crc kubenswrapper[4959]: E0121 13:37:22.286720 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:37:34 crc kubenswrapper[4959]: I0121 13:37:34.286840 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:37:34 crc kubenswrapper[4959]: E0121 13:37:34.287949 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:37:45 crc kubenswrapper[4959]: I0121 13:37:45.288688 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:37:45 crc kubenswrapper[4959]: E0121 13:37:45.290088 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:37:58 crc kubenswrapper[4959]: I0121 13:37:58.286326 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:37:58 crc kubenswrapper[4959]: E0121 13:37:58.287356 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:38:11 crc kubenswrapper[4959]: I0121 13:38:11.301193 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:38:11 crc kubenswrapper[4959]: E0121 13:38:11.304230 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:38:25 crc kubenswrapper[4959]: I0121 13:38:25.286178 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:38:25 crc kubenswrapper[4959]: E0121 13:38:25.286936 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:38:34 crc kubenswrapper[4959]: I0121 13:38:34.052874 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gmt72"] Jan 21 13:38:34 crc kubenswrapper[4959]: I0121 13:38:34.066849 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-93be-account-create-update-nzrqj"] Jan 21 13:38:34 crc kubenswrapper[4959]: I0121 13:38:34.079858 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gmt72"] Jan 21 13:38:34 crc kubenswrapper[4959]: I0121 13:38:34.089639 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-93be-account-create-update-nzrqj"] Jan 21 13:38:35 crc kubenswrapper[4959]: I0121 13:38:35.030109 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-l5zlf"] Jan 21 13:38:35 crc kubenswrapper[4959]: I0121 13:38:35.039916 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6593-account-create-update-7p2kt"] Jan 21 13:38:35 crc kubenswrapper[4959]: I0121 13:38:35.050872 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-l5zlf"] Jan 21 13:38:35 crc kubenswrapper[4959]: I0121 13:38:35.060695 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6593-account-create-update-7p2kt"] Jan 21 13:38:35 crc kubenswrapper[4959]: I0121 13:38:35.297509 4959 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d7d70b3-f5c3-4257-b166-8883de61c0b3" path="/var/lib/kubelet/pods/0d7d70b3-f5c3-4257-b166-8883de61c0b3/volumes" Jan 21 13:38:35 crc kubenswrapper[4959]: I0121 13:38:35.298704 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95967790-40ad-4e28-a454-277919176550" path="/var/lib/kubelet/pods/95967790-40ad-4e28-a454-277919176550/volumes" Jan 21 13:38:35 crc kubenswrapper[4959]: I0121 13:38:35.299766 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab805b64-b362-482a-9421-0e75b98afbdc" path="/var/lib/kubelet/pods/ab805b64-b362-482a-9421-0e75b98afbdc/volumes" Jan 21 13:38:35 crc kubenswrapper[4959]: I0121 13:38:35.300664 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df" path="/var/lib/kubelet/pods/dbda7c4c-90f4-49c2-a9d3-e0e839c7a5df/volumes" Jan 21 13:38:40 crc kubenswrapper[4959]: I0121 13:38:40.286879 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:38:40 crc kubenswrapper[4959]: E0121 13:38:40.287713 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:38:43 crc kubenswrapper[4959]: I0121 13:38:43.035286 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2kxzl"] Jan 21 13:38:43 crc kubenswrapper[4959]: I0121 13:38:43.049272 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2kxzl"] Jan 21 13:38:43 crc kubenswrapper[4959]: I0121 13:38:43.296146 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42032608-71eb-41ff-87a3-4a06780169ac" path="/var/lib/kubelet/pods/42032608-71eb-41ff-87a3-4a06780169ac/volumes" Jan 21 13:38:44 crc kubenswrapper[4959]: I0121 13:38:44.029899 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-878b-account-create-update-mmmb5"] Jan 21 13:38:44 crc kubenswrapper[4959]: I0121 13:38:44.038736 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-878b-account-create-update-mmmb5"] Jan 21 13:38:45 crc kubenswrapper[4959]: I0121 13:38:45.303375 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b320e7fa-a18b-46ff-be7b-1753fb60b768" path="/var/lib/kubelet/pods/b320e7fa-a18b-46ff-be7b-1753fb60b768/volumes" Jan 21 13:38:51 crc kubenswrapper[4959]: I0121 13:38:51.285600 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:38:51 crc kubenswrapper[4959]: E0121 13:38:51.286370 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:39:03 crc kubenswrapper[4959]: I0121 13:39:03.296736 4959 scope.go:117] "RemoveContainer" 
containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:39:03 crc kubenswrapper[4959]: E0121 13:39:03.297667 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:39:03 crc kubenswrapper[4959]: I0121 13:39:03.305163 4959 generic.go:334] "Generic (PLEG): container finished" podID="6c764408-7cb2-4537-b591-626ea5924406" containerID="81c13dad251e94bac395b7b7c1210b6eff9c3ab41d922083610eb2dbd6e79cc8" exitCode=0 Jan 21 13:39:03 crc kubenswrapper[4959]: I0121 13:39:03.305207 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" event={"ID":"6c764408-7cb2-4537-b591-626ea5924406","Type":"ContainerDied","Data":"81c13dad251e94bac395b7b7c1210b6eff9c3ab41d922083610eb2dbd6e79cc8"} Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.677779 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.824517 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-ssh-key-openstack-edpm-ipam\") pod \"6c764408-7cb2-4537-b591-626ea5924406\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.824567 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-inventory\") pod \"6c764408-7cb2-4537-b591-626ea5924406\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.824632 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-bootstrap-combined-ca-bundle\") pod \"6c764408-7cb2-4537-b591-626ea5924406\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.824654 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfvgx\" (UniqueName: \"kubernetes.io/projected/6c764408-7cb2-4537-b591-626ea5924406-kube-api-access-xfvgx\") pod \"6c764408-7cb2-4537-b591-626ea5924406\" (UID: \"6c764408-7cb2-4537-b591-626ea5924406\") " Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.832081 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c764408-7cb2-4537-b591-626ea5924406-kube-api-access-xfvgx" (OuterVolumeSpecName: "kube-api-access-xfvgx") pod "6c764408-7cb2-4537-b591-626ea5924406" (UID: "6c764408-7cb2-4537-b591-626ea5924406"). InnerVolumeSpecName "kube-api-access-xfvgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.832712 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6c764408-7cb2-4537-b591-626ea5924406" (UID: "6c764408-7cb2-4537-b591-626ea5924406"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.851800 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6c764408-7cb2-4537-b591-626ea5924406" (UID: "6c764408-7cb2-4537-b591-626ea5924406"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.857498 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-inventory" (OuterVolumeSpecName: "inventory") pod "6c764408-7cb2-4537-b591-626ea5924406" (UID: "6c764408-7cb2-4537-b591-626ea5924406"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.926799 4959 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.926826 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfvgx\" (UniqueName: \"kubernetes.io/projected/6c764408-7cb2-4537-b591-626ea5924406-kube-api-access-xfvgx\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.926837 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:04 crc kubenswrapper[4959]: I0121 13:39:04.926845 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c764408-7cb2-4537-b591-626ea5924406-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.326012 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" event={"ID":"6c764408-7cb2-4537-b591-626ea5924406","Type":"ContainerDied","Data":"b91548597cb27b82d3ab5743457151d51a81642a217daeae6627d81383277871"} Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.326058 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b91548597cb27b82d3ab5743457151d51a81642a217daeae6627d81383277871" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.326128 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.410856 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz"] Jan 21 13:39:05 crc kubenswrapper[4959]: E0121 13:39:05.411850 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerName="extract-content" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.411874 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerName="extract-content" Jan 21 13:39:05 crc kubenswrapper[4959]: E0121 13:39:05.411885 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerName="extract-utilities" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.411891 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerName="extract-utilities" Jan 21 13:39:05 crc kubenswrapper[4959]: E0121 13:39:05.411903 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerName="registry-server" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.411909 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerName="registry-server" Jan 21 13:39:05 crc kubenswrapper[4959]: E0121 13:39:05.411939 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c764408-7cb2-4537-b591-626ea5924406" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.411949 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c764408-7cb2-4537-b591-626ea5924406" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.412117 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c764408-7cb2-4537-b591-626ea5924406" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.412146 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5f452c-2af0-4723-95e7-6504d0b616b3" containerName="registry-server" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.412796 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.415985 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.416430 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.418500 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.419593 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.424898 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz"] Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.564131 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.564467 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.564512 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg52q\" (UniqueName: \"kubernetes.io/projected/e069780c-b1ae-4b75-8724-fe682e5a762d-kube-api-access-rg52q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.666994 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg52q\" (UniqueName: \"kubernetes.io/projected/e069780c-b1ae-4b75-8724-fe682e5a762d-kube-api-access-rg52q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.667176 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.667216 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.681701 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.684661 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.687583 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg52q\" (UniqueName: \"kubernetes.io/projected/e069780c-b1ae-4b75-8724-fe682e5a762d-kube-api-access-rg52q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:05 crc kubenswrapper[4959]: I0121 13:39:05.775522 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:39:06 crc kubenswrapper[4959]: I0121 13:39:06.456552 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz"] Jan 21 13:39:06 crc kubenswrapper[4959]: I0121 13:39:06.460568 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:39:07 crc kubenswrapper[4959]: I0121 13:39:07.344590 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" event={"ID":"e069780c-b1ae-4b75-8724-fe682e5a762d","Type":"ContainerStarted","Data":"8c48890c57a0ca7feb632145b173ac8398c4d935ce5faf42de9aa90e1d66dd1c"} Jan 21 13:39:07 crc kubenswrapper[4959]: I0121 13:39:07.344918 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" event={"ID":"e069780c-b1ae-4b75-8724-fe682e5a762d","Type":"ContainerStarted","Data":"378175aa1f2dbaed1ca26733ea85ae8f57230a0059a74bdbd79ac921a864efde"} Jan 21 13:39:11 crc kubenswrapper[4959]: I0121 13:39:11.033023 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" podStartSLOduration=5.595721701 podStartE2EDuration="6.033007811s" podCreationTimestamp="2026-01-21 13:39:05 +0000 UTC" firstStartedPulling="2026-01-21 13:39:06.460301962 +0000 UTC m=+1807.423332505" lastFinishedPulling="2026-01-21 13:39:06.897588072 +0000 UTC m=+1807.860618615" observedRunningTime="2026-01-21 13:39:07.365410661 +0000 UTC m=+1808.328441204" watchObservedRunningTime="2026-01-21 13:39:11.033007811 +0000 UTC 
m=+1811.996038354" Jan 21 13:39:11 crc kubenswrapper[4959]: I0121 13:39:11.038877 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-79tqr"] Jan 21 13:39:11 crc kubenswrapper[4959]: I0121 13:39:11.051325 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-79tqr"] Jan 21 13:39:11 crc kubenswrapper[4959]: I0121 13:39:11.297080 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8078b737-6203-4e62-92aa-3eb8b3cfa4ed" path="/var/lib/kubelet/pods/8078b737-6203-4e62-92aa-3eb8b3cfa4ed/volumes" Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.035290 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-td4zg"] Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.044882 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9e31-account-create-update-ddns8"] Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.055110 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-z9994"] Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.063590 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4ddb-account-create-update-jnfns"] Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.071440 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-z9994"] Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.078529 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-td4zg"] Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.085067 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9e31-account-create-update-ddns8"] Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.091629 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4ddb-account-create-update-jnfns"] Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.299649 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49716e38-daf2-4411-aa24-061680a0bcbd" path="/var/lib/kubelet/pods/49716e38-daf2-4411-aa24-061680a0bcbd/volumes" Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.301778 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626ad741-9147-4903-a245-1728d168ded5" path="/var/lib/kubelet/pods/626ad741-9147-4903-a245-1728d168ded5/volumes" Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.302871 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbc83ca-99db-446e-972f-3cd8831575be" path="/var/lib/kubelet/pods/7fbc83ca-99db-446e-972f-3cd8831575be/volumes" Jan 21 13:39:15 crc kubenswrapper[4959]: I0121 13:39:15.304321 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfba12bc-c2c4-46de-9e89-9887d635f4fb" path="/var/lib/kubelet/pods/dfba12bc-c2c4-46de-9e89-9887d635f4fb/volumes" Jan 21 13:39:16 crc kubenswrapper[4959]: I0121 13:39:16.039847 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5ae-account-create-update-gldcv"] Jan 21 13:39:16 crc kubenswrapper[4959]: I0121 13:39:16.071750 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rmpkw"] Jan 21 13:39:16 crc kubenswrapper[4959]: I0121 13:39:16.081016 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5ae-account-create-update-gldcv"] Jan 21 13:39:16 crc kubenswrapper[4959]: I0121 13:39:16.089337 4959 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/neutron-db-create-rmpkw"] Jan 21 13:39:17 crc kubenswrapper[4959]: I0121 13:39:17.294885 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f2316c-ad55-4f35-9063-82708a99e69f" path="/var/lib/kubelet/pods/47f2316c-ad55-4f35-9063-82708a99e69f/volumes" Jan 21 13:39:17 crc kubenswrapper[4959]: I0121 13:39:17.295620 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f7b344-15e3-4604-bdf0-b40a33752eac" path="/var/lib/kubelet/pods/e8f7b344-15e3-4604-bdf0-b40a33752eac/volumes" Jan 21 13:39:18 crc kubenswrapper[4959]: I0121 13:39:18.286770 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:39:18 crc kubenswrapper[4959]: E0121 13:39:18.287408 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:39:32 crc kubenswrapper[4959]: I0121 13:39:32.286693 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:39:32 crc kubenswrapper[4959]: E0121 13:39:32.287534 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.043275 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8244c"] Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.050488 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8244c"] Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.297150 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fbabc2b-28b7-4e5e-b93c-96b9e3060c76" path="/var/lib/kubelet/pods/3fbabc2b-28b7-4e5e-b93c-96b9e3060c76/volumes" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.552074 4959 scope.go:117] "RemoveContainer" containerID="24d637ec626d553e179e45ff176c34dd6f687948fe9ebdfe66fb3470c8e24d8f" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.573379 4959 scope.go:117] "RemoveContainer" containerID="c7e3c67f846e43f396ae424ec508ed3b0511eadefa537abefab06029f9a7ff52" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.619756 4959 scope.go:117] "RemoveContainer" containerID="2c2d34de2043118e4459e2486e63ddb7757ce47fc2bca2db2d6f0773da8ce26c" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.666357 4959 scope.go:117] "RemoveContainer" containerID="0e8c3794ac3e0611513722d6ab8fb02d862d0237755edc59fd8ab8776e9f4712" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.713686 4959 scope.go:117] "RemoveContainer" containerID="3e14f0f27ed1d9bdf769bd92a53fee7e33860ce25510342fa66257c45f6cc066" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.756044 4959 scope.go:117] "RemoveContainer" containerID="98e430745bf400b4c3865e365fd8364e3db8be4b07411ceb7a6f2095fdd5927c" Jan 21 
13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.797119 4959 scope.go:117] "RemoveContainer" containerID="9e776dbdbc5d96464ac5a830c7d94ada4877595d5a305d4e2044a837a1b45cab" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.820969 4959 scope.go:117] "RemoveContainer" containerID="9dbd450ffa0594e436a80325bb5d9548dcea49385caf37032bac70e54dc74564" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.844080 4959 scope.go:117] "RemoveContainer" containerID="acc29c92913bded6c77c99e55a13ae2a4af53c86c6bf8621bac8a373989ca624" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.861884 4959 scope.go:117] "RemoveContainer" containerID="ad697d6dcefd2ad9205e228d8a5b0d5769bcdad098502f232d1e457733622bc4" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.900622 4959 scope.go:117] "RemoveContainer" containerID="ed458762e363305ae08229ec5d4c586f606198fa1d5d39ced525483442aef9c9" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.922932 4959 scope.go:117] "RemoveContainer" containerID="73390ed88a59e75822991db5f56eff4eff2cf86910e860cd9b3ac570647f5c13" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.943866 4959 scope.go:117] "RemoveContainer" containerID="387cce0b61f4a11b22b5181f73ded596c601b1a626f12e0cf65a89488fd6404f" Jan 21 13:39:33 crc kubenswrapper[4959]: I0121 13:39:33.963477 4959 scope.go:117] "RemoveContainer" containerID="f356c1d615ff799464c9a8b5ae16d0e9b212376c5bad972a1d1f91d336383419" Jan 21 13:39:43 crc kubenswrapper[4959]: I0121 13:39:43.286781 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:39:43 crc kubenswrapper[4959]: E0121 13:39:43.288075 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:39:45 crc kubenswrapper[4959]: I0121 13:39:45.028619 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-s5xcr"] Jan 21 13:39:45 crc kubenswrapper[4959]: I0121 13:39:45.036902 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-s5xcr"] Jan 21 13:39:45 crc kubenswrapper[4959]: I0121 13:39:45.296732 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8abfd29e-86a3-448f-b722-c98d11933e6c" path="/var/lib/kubelet/pods/8abfd29e-86a3-448f-b722-c98d11933e6c/volumes" Jan 21 13:39:57 crc kubenswrapper[4959]: I0121 13:39:57.286415 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:39:57 crc kubenswrapper[4959]: I0121 13:39:57.794238 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"1ee363e4e1583c15674ba6308ac663c5b4e8b3fda56922564a71d99462d29340"} Jan 21 13:40:30 crc kubenswrapper[4959]: I0121 13:40:30.067085 4959 generic.go:334] "Generic (PLEG): container finished" podID="e069780c-b1ae-4b75-8724-fe682e5a762d" containerID="8c48890c57a0ca7feb632145b173ac8398c4d935ce5faf42de9aa90e1d66dd1c" exitCode=0 Jan 21 13:40:30 crc kubenswrapper[4959]: I0121 13:40:30.067246 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" event={"ID":"e069780c-b1ae-4b75-8724-fe682e5a762d","Type":"ContainerDied","Data":"8c48890c57a0ca7feb632145b173ac8398c4d935ce5faf42de9aa90e1d66dd1c"} Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.534124 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.674257 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg52q\" (UniqueName: \"kubernetes.io/projected/e069780c-b1ae-4b75-8724-fe682e5a762d-kube-api-access-rg52q\") pod \"e069780c-b1ae-4b75-8724-fe682e5a762d\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.674304 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-inventory\") pod \"e069780c-b1ae-4b75-8724-fe682e5a762d\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.674522 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-ssh-key-openstack-edpm-ipam\") pod \"e069780c-b1ae-4b75-8724-fe682e5a762d\" (UID: \"e069780c-b1ae-4b75-8724-fe682e5a762d\") " Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.680210 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e069780c-b1ae-4b75-8724-fe682e5a762d-kube-api-access-rg52q" (OuterVolumeSpecName: "kube-api-access-rg52q") pod "e069780c-b1ae-4b75-8724-fe682e5a762d" (UID: "e069780c-b1ae-4b75-8724-fe682e5a762d"). InnerVolumeSpecName "kube-api-access-rg52q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.700751 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e069780c-b1ae-4b75-8724-fe682e5a762d" (UID: "e069780c-b1ae-4b75-8724-fe682e5a762d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.710285 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-inventory" (OuterVolumeSpecName: "inventory") pod "e069780c-b1ae-4b75-8724-fe682e5a762d" (UID: "e069780c-b1ae-4b75-8724-fe682e5a762d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.776838 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg52q\" (UniqueName: \"kubernetes.io/projected/e069780c-b1ae-4b75-8724-fe682e5a762d-kube-api-access-rg52q\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.776958 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:31 crc kubenswrapper[4959]: I0121 13:40:31.776972 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e069780c-b1ae-4b75-8724-fe682e5a762d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.089359 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" event={"ID":"e069780c-b1ae-4b75-8724-fe682e5a762d","Type":"ContainerDied","Data":"378175aa1f2dbaed1ca26733ea85ae8f57230a0059a74bdbd79ac921a864efde"} Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.089604 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="378175aa1f2dbaed1ca26733ea85ae8f57230a0059a74bdbd79ac921a864efde" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.089426 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.177736 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj"] Jan 21 13:40:32 crc kubenswrapper[4959]: E0121 13:40:32.178087 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e069780c-b1ae-4b75-8724-fe682e5a762d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.178119 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e069780c-b1ae-4b75-8724-fe682e5a762d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.178312 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e069780c-b1ae-4b75-8724-fe682e5a762d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.179015 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.181888 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.182149 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.182591 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.192061 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.194279 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj"] Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.284991 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.285320 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzkb2\" (UniqueName: \"kubernetes.io/projected/74200132-4df6-40f7-a62d-c19984036788-kube-api-access-fzkb2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.285447 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.387909 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.388252 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.388292 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzkb2\" (UniqueName: 
\"kubernetes.io/projected/74200132-4df6-40f7-a62d-c19984036788-kube-api-access-fzkb2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.396338 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.399537 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.408816 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzkb2\" (UniqueName: \"kubernetes.io/projected/74200132-4df6-40f7-a62d-c19984036788-kube-api-access-fzkb2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:32 crc kubenswrapper[4959]: I0121 13:40:32.497233 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:33 crc kubenswrapper[4959]: I0121 13:40:33.067339 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj"] Jan 21 13:40:33 crc kubenswrapper[4959]: I0121 13:40:33.098923 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" event={"ID":"74200132-4df6-40f7-a62d-c19984036788","Type":"ContainerStarted","Data":"a1c34854d9cadbee64133becde975440d7bf9fee9dbd62ed41e8ec8d75b94b55"} Jan 21 13:40:34 crc kubenswrapper[4959]: I0121 13:40:34.114875 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" event={"ID":"74200132-4df6-40f7-a62d-c19984036788","Type":"ContainerStarted","Data":"00efa2327931af6fbcb3cb4c2bf9ea78dd73e7776612133feee4703af8ee0b70"} Jan 21 13:40:34 crc kubenswrapper[4959]: I0121 13:40:34.140625 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" podStartSLOduration=1.47043445 podStartE2EDuration="2.140606204s" podCreationTimestamp="2026-01-21 13:40:32 +0000 UTC" firstStartedPulling="2026-01-21 13:40:33.080572789 +0000 UTC m=+1894.043603332" lastFinishedPulling="2026-01-21 13:40:33.750744523 +0000 UTC m=+1894.713775086" observedRunningTime="2026-01-21 13:40:34.133163023 +0000 UTC m=+1895.096193616" watchObservedRunningTime="2026-01-21 13:40:34.140606204 +0000 UTC m=+1895.103636747" Jan 21 13:40:34 crc kubenswrapper[4959]: I0121 13:40:34.189579 4959 scope.go:117] "RemoveContainer" 
containerID="ae60bbe29467cba8cc3a6045563a7709a663d55754abafc9316996dcd8865834" Jan 21 13:40:36 crc kubenswrapper[4959]: I0121 13:40:36.058161 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zdr5r"] Jan 21 13:40:36 crc kubenswrapper[4959]: I0121 13:40:36.069473 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zdr5r"] Jan 21 13:40:37 crc kubenswrapper[4959]: I0121 13:40:37.299537 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c681a0f6-3130-46d9-8a0e-9c27ae2a3171" path="/var/lib/kubelet/pods/c681a0f6-3130-46d9-8a0e-9c27ae2a3171/volumes" Jan 21 13:40:39 crc kubenswrapper[4959]: I0121 13:40:39.303917 4959 generic.go:334] "Generic (PLEG): container finished" podID="74200132-4df6-40f7-a62d-c19984036788" containerID="00efa2327931af6fbcb3cb4c2bf9ea78dd73e7776612133feee4703af8ee0b70" exitCode=0 Jan 21 13:40:39 crc kubenswrapper[4959]: I0121 13:40:39.305396 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" event={"ID":"74200132-4df6-40f7-a62d-c19984036788","Type":"ContainerDied","Data":"00efa2327931af6fbcb3cb4c2bf9ea78dd73e7776612133feee4703af8ee0b70"} Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.701616 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.750163 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-ssh-key-openstack-edpm-ipam\") pod \"74200132-4df6-40f7-a62d-c19984036788\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.750332 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-inventory\") pod \"74200132-4df6-40f7-a62d-c19984036788\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.750454 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzkb2\" (UniqueName: \"kubernetes.io/projected/74200132-4df6-40f7-a62d-c19984036788-kube-api-access-fzkb2\") pod \"74200132-4df6-40f7-a62d-c19984036788\" (UID: \"74200132-4df6-40f7-a62d-c19984036788\") " Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.756813 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74200132-4df6-40f7-a62d-c19984036788-kube-api-access-fzkb2" (OuterVolumeSpecName: "kube-api-access-fzkb2") pod "74200132-4df6-40f7-a62d-c19984036788" (UID: "74200132-4df6-40f7-a62d-c19984036788"). InnerVolumeSpecName "kube-api-access-fzkb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.777149 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "74200132-4df6-40f7-a62d-c19984036788" (UID: "74200132-4df6-40f7-a62d-c19984036788"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.779834 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-inventory" (OuterVolumeSpecName: "inventory") pod "74200132-4df6-40f7-a62d-c19984036788" (UID: "74200132-4df6-40f7-a62d-c19984036788"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.851835 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.851875 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzkb2\" (UniqueName: \"kubernetes.io/projected/74200132-4df6-40f7-a62d-c19984036788-kube-api-access-fzkb2\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:40 crc kubenswrapper[4959]: I0121 13:40:40.851890 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74200132-4df6-40f7-a62d-c19984036788-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.322230 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" event={"ID":"74200132-4df6-40f7-a62d-c19984036788","Type":"ContainerDied","Data":"a1c34854d9cadbee64133becde975440d7bf9fee9dbd62ed41e8ec8d75b94b55"} Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.322522 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c34854d9cadbee64133becde975440d7bf9fee9dbd62ed41e8ec8d75b94b55" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.322278 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.403251 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb"] Jan 21 13:40:41 crc kubenswrapper[4959]: E0121 13:40:41.403786 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74200132-4df6-40f7-a62d-c19984036788" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.403815 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="74200132-4df6-40f7-a62d-c19984036788" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.404126 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="74200132-4df6-40f7-a62d-c19984036788" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.404863 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.409050 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.410462 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.410762 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.411459 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.419987 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb"] Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.460579 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.460729 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffjp\" (UniqueName: \"kubernetes.io/projected/b3837c43-f9b9-4d7f-80a2-26b582090af2-kube-api-access-gffjp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.460795 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.562666 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.562766 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.562924 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gffjp\" (UniqueName: \"kubernetes.io/projected/b3837c43-f9b9-4d7f-80a2-26b582090af2-kube-api-access-gffjp\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.566963 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.568275 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.591555 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gffjp\" (UniqueName: \"kubernetes.io/projected/b3837c43-f9b9-4d7f-80a2-26b582090af2-kube-api-access-gffjp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5s7cb\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:41 crc kubenswrapper[4959]: I0121 13:40:41.720697 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:40:42 crc kubenswrapper[4959]: I0121 13:40:42.241817 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb"] Jan 21 13:40:42 crc kubenswrapper[4959]: I0121 13:40:42.330838 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" event={"ID":"b3837c43-f9b9-4d7f-80a2-26b582090af2","Type":"ContainerStarted","Data":"b8f7272900a5ba4e141430b78167419849bcbd9d611b888b5d24bbc8210511da"} Jan 21 13:40:43 crc kubenswrapper[4959]: I0121 13:40:43.339275 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" event={"ID":"b3837c43-f9b9-4d7f-80a2-26b582090af2","Type":"ContainerStarted","Data":"59f8021206f94808eda01634fadeef263d6fbe0450cfc0526768e11f83f749ea"} Jan 21 13:40:43 crc kubenswrapper[4959]: I0121 13:40:43.356670 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" podStartSLOduration=1.9380147779999999 podStartE2EDuration="2.356652227s" podCreationTimestamp="2026-01-21 13:40:41 +0000 UTC" firstStartedPulling="2026-01-21 13:40:42.244958298 +0000 UTC m=+1903.207988841" lastFinishedPulling="2026-01-21 13:40:42.663595737 +0000 UTC m=+1903.626626290" observedRunningTime="2026-01-21 13:40:43.354114489 +0000 UTC m=+1904.317145042" watchObservedRunningTime="2026-01-21 13:40:43.356652227 +0000 UTC m=+1904.319682770" Jan 21 13:40:53 crc kubenswrapper[4959]: I0121 13:40:53.040959 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-29kjv"] Jan 21 13:40:53 crc kubenswrapper[4959]: I0121 13:40:53.051891 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-29kjv"] Jan 21 13:40:53 
crc kubenswrapper[4959]: I0121 13:40:53.300073 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbc182d-b589-4402-a7a4-e18453424630" path="/var/lib/kubelet/pods/1bbc182d-b589-4402-a7a4-e18453424630/volumes" Jan 21 13:40:54 crc kubenswrapper[4959]: I0121 13:40:54.037496 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-r9zsv"] Jan 21 13:40:54 crc kubenswrapper[4959]: I0121 13:40:54.049359 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-r9zsv"] Jan 21 13:40:55 crc kubenswrapper[4959]: I0121 13:40:55.298037 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0582f964-2c47-48a3-9bba-6f169f2a32c8" path="/var/lib/kubelet/pods/0582f964-2c47-48a3-9bba-6f169f2a32c8/volumes" Jan 21 13:41:00 crc kubenswrapper[4959]: I0121 13:41:00.048159 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kbs79"] Jan 21 13:41:00 crc kubenswrapper[4959]: I0121 13:41:00.058417 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kbs79"] Jan 21 13:41:01 crc kubenswrapper[4959]: I0121 13:41:01.298232 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8172e9-2396-4f6b-a632-8e32400aea67" path="/var/lib/kubelet/pods/ac8172e9-2396-4f6b-a632-8e32400aea67/volumes" Jan 21 13:41:02 crc kubenswrapper[4959]: I0121 13:41:02.029475 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vl9c7"] Jan 21 13:41:02 crc kubenswrapper[4959]: I0121 13:41:02.037207 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vl9c7"] Jan 21 13:41:03 crc kubenswrapper[4959]: I0121 13:41:03.297889 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431f411c-8ae5-42e7-b76a-4ca21314112a" path="/var/lib/kubelet/pods/431f411c-8ae5-42e7-b76a-4ca21314112a/volumes" Jan 21 13:41:22 crc kubenswrapper[4959]: I0121 13:41:22.690243 4959 generic.go:334] "Generic (PLEG): container finished" podID="b3837c43-f9b9-4d7f-80a2-26b582090af2" containerID="59f8021206f94808eda01634fadeef263d6fbe0450cfc0526768e11f83f749ea" exitCode=0 Jan 21 13:41:22 crc kubenswrapper[4959]: I0121 13:41:22.690590 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" event={"ID":"b3837c43-f9b9-4d7f-80a2-26b582090af2","Type":"ContainerDied","Data":"59f8021206f94808eda01634fadeef263d6fbe0450cfc0526768e11f83f749ea"} Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.128655 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.187277 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-inventory\") pod \"b3837c43-f9b9-4d7f-80a2-26b582090af2\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.187407 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-ssh-key-openstack-edpm-ipam\") pod \"b3837c43-f9b9-4d7f-80a2-26b582090af2\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.187485 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gffjp\" (UniqueName: \"kubernetes.io/projected/b3837c43-f9b9-4d7f-80a2-26b582090af2-kube-api-access-gffjp\") pod \"b3837c43-f9b9-4d7f-80a2-26b582090af2\" (UID: \"b3837c43-f9b9-4d7f-80a2-26b582090af2\") " Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.195349 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3837c43-f9b9-4d7f-80a2-26b582090af2-kube-api-access-gffjp" (OuterVolumeSpecName: "kube-api-access-gffjp") pod "b3837c43-f9b9-4d7f-80a2-26b582090af2" (UID: "b3837c43-f9b9-4d7f-80a2-26b582090af2"). InnerVolumeSpecName "kube-api-access-gffjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.215490 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3837c43-f9b9-4d7f-80a2-26b582090af2" (UID: "b3837c43-f9b9-4d7f-80a2-26b582090af2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.218840 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-inventory" (OuterVolumeSpecName: "inventory") pod "b3837c43-f9b9-4d7f-80a2-26b582090af2" (UID: "b3837c43-f9b9-4d7f-80a2-26b582090af2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.290025 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.290069 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3837c43-f9b9-4d7f-80a2-26b582090af2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.290084 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gffjp\" (UniqueName: \"kubernetes.io/projected/b3837c43-f9b9-4d7f-80a2-26b582090af2-kube-api-access-gffjp\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.723496 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" event={"ID":"b3837c43-f9b9-4d7f-80a2-26b582090af2","Type":"ContainerDied","Data":"b8f7272900a5ba4e141430b78167419849bcbd9d611b888b5d24bbc8210511da"} Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.723867 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f7272900a5ba4e141430b78167419849bcbd9d611b888b5d24bbc8210511da" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.723542 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.788300 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps"] Jan 21 13:41:24 crc kubenswrapper[4959]: E0121 13:41:24.788691 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3837c43-f9b9-4d7f-80a2-26b582090af2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.788720 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3837c43-f9b9-4d7f-80a2-26b582090af2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.788911 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3837c43-f9b9-4d7f-80a2-26b582090af2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.789688 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.793230 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.793355 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.793626 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.799852 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.800163 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps"] Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.899963 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m86t\" (UniqueName: \"kubernetes.io/projected/e324e7a3-6b8a-4c28-a870-c03a9f772439-kube-api-access-7m86t\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.900015 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:24 crc kubenswrapper[4959]: I0121 13:41:24.900484 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:25 crc kubenswrapper[4959]: I0121 13:41:25.002715 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:25 crc kubenswrapper[4959]: I0121 13:41:25.002827 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m86t\" (UniqueName: \"kubernetes.io/projected/e324e7a3-6b8a-4c28-a870-c03a9f772439-kube-api-access-7m86t\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:25 crc kubenswrapper[4959]: I0121 13:41:25.002858 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:25 crc kubenswrapper[4959]: I0121 13:41:25.007754 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:25 crc kubenswrapper[4959]: I0121 13:41:25.016393 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:25 crc kubenswrapper[4959]: I0121 13:41:25.019061 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m86t\" (UniqueName: \"kubernetes.io/projected/e324e7a3-6b8a-4c28-a870-c03a9f772439-kube-api-access-7m86t\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:25 crc kubenswrapper[4959]: I0121 13:41:25.107039 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:26 crc kubenswrapper[4959]: I0121 13:41:26.167891 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps"] Jan 21 13:41:26 crc kubenswrapper[4959]: I0121 13:41:26.741194 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" event={"ID":"e324e7a3-6b8a-4c28-a870-c03a9f772439","Type":"ContainerStarted","Data":"385925e0c218aa42474009ed3159645bf291da9a0d0346cec4b9788ba098ef27"} Jan 21 13:41:27 crc kubenswrapper[4959]: I0121 13:41:27.753834 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" event={"ID":"e324e7a3-6b8a-4c28-a870-c03a9f772439","Type":"ContainerStarted","Data":"a4f962ff3b56d26d979ed8e38e9942deb73fcf868f4d0c54e3b31364ec7c1dc2"} Jan 21 13:41:27 crc kubenswrapper[4959]: I0121 13:41:27.774955 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" podStartSLOduration=3.389957116 podStartE2EDuration="3.774930029s" podCreationTimestamp="2026-01-21 13:41:24 +0000 UTC" firstStartedPulling="2026-01-21 13:41:26.170020599 +0000 UTC m=+1947.133051142" lastFinishedPulling="2026-01-21 13:41:26.554993522 +0000 UTC m=+1947.518024055" observedRunningTime="2026-01-21 13:41:27.771798915 +0000 UTC m=+1948.734829458" watchObservedRunningTime="2026-01-21 13:41:27.774930029 +0000 UTC m=+1948.737960582" Jan 21 13:41:30 crc kubenswrapper[4959]: I0121 13:41:30.785991 4959 generic.go:334] "Generic (PLEG): container finished" podID="e324e7a3-6b8a-4c28-a870-c03a9f772439" 
containerID="a4f962ff3b56d26d979ed8e38e9942deb73fcf868f4d0c54e3b31364ec7c1dc2" exitCode=0 Jan 21 13:41:30 crc kubenswrapper[4959]: I0121 13:41:30.786064 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" event={"ID":"e324e7a3-6b8a-4c28-a870-c03a9f772439","Type":"ContainerDied","Data":"a4f962ff3b56d26d979ed8e38e9942deb73fcf868f4d0c54e3b31364ec7c1dc2"} Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.188591 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.323648 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-ssh-key-openstack-edpm-ipam\") pod \"e324e7a3-6b8a-4c28-a870-c03a9f772439\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.323765 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-inventory\") pod \"e324e7a3-6b8a-4c28-a870-c03a9f772439\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.323838 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m86t\" (UniqueName: \"kubernetes.io/projected/e324e7a3-6b8a-4c28-a870-c03a9f772439-kube-api-access-7m86t\") pod \"e324e7a3-6b8a-4c28-a870-c03a9f772439\" (UID: \"e324e7a3-6b8a-4c28-a870-c03a9f772439\") " Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.329634 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e324e7a3-6b8a-4c28-a870-c03a9f772439-kube-api-access-7m86t" (OuterVolumeSpecName: "kube-api-access-7m86t") pod "e324e7a3-6b8a-4c28-a870-c03a9f772439" (UID: "e324e7a3-6b8a-4c28-a870-c03a9f772439"). InnerVolumeSpecName "kube-api-access-7m86t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.350121 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-inventory" (OuterVolumeSpecName: "inventory") pod "e324e7a3-6b8a-4c28-a870-c03a9f772439" (UID: "e324e7a3-6b8a-4c28-a870-c03a9f772439"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.350653 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e324e7a3-6b8a-4c28-a870-c03a9f772439" (UID: "e324e7a3-6b8a-4c28-a870-c03a9f772439"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.427191 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m86t\" (UniqueName: \"kubernetes.io/projected/e324e7a3-6b8a-4c28-a870-c03a9f772439-kube-api-access-7m86t\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.427222 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.427234 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324e7a3-6b8a-4c28-a870-c03a9f772439-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.803117 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" event={"ID":"e324e7a3-6b8a-4c28-a870-c03a9f772439","Type":"ContainerDied","Data":"385925e0c218aa42474009ed3159645bf291da9a0d0346cec4b9788ba098ef27"} Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.803163 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="385925e0c218aa42474009ed3159645bf291da9a0d0346cec4b9788ba098ef27" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.803171 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.901769 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2"] Jan 21 13:41:32 crc kubenswrapper[4959]: E0121 13:41:32.902286 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e324e7a3-6b8a-4c28-a870-c03a9f772439" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.902311 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e324e7a3-6b8a-4c28-a870-c03a9f772439" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.902577 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e324e7a3-6b8a-4c28-a870-c03a9f772439" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.903632 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.906807 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.907491 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.908062 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.908245 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:41:32 crc kubenswrapper[4959]: I0121 13:41:32.914948 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2"] Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.037309 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.037926 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.038020 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dlqn\" (UniqueName: \"kubernetes.io/projected/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-kube-api-access-7dlqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.139697 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.139813 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.139851 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dlqn\" (UniqueName: 
\"kubernetes.io/projected/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-kube-api-access-7dlqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.145157 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.147737 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.173318 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dlqn\" (UniqueName: \"kubernetes.io/projected/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-kube-api-access-7dlqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.230328 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.725061 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2"] Jan 21 13:41:33 crc kubenswrapper[4959]: I0121 13:41:33.813872 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" event={"ID":"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c","Type":"ContainerStarted","Data":"8eda78b028043b82f2431c9eda123e713f12677afee767e74011048bbeb16b77"} Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.265395 4959 scope.go:117] "RemoveContainer" containerID="b111740e5beee284477df938d7c9bd06c39614e4c65951e34b440230ae99bf60" Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.293936 4959 scope.go:117] "RemoveContainer" containerID="dcafc570377a5e7f90897c01c08185d803b8c72c10f1127baab477c9febb0df6" Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.368018 4959 scope.go:117] "RemoveContainer" containerID="aaf2c20d5f88e6c0176741dbdbdbc189aef21b9770aa1f568956ea31c23f8c13" Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.387983 4959 scope.go:117] "RemoveContainer" containerID="d4f2eee4ac0b3985c4aa47ece6bc98dc713f27db40b41de0b80d6acdf1cbdc3d" Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.454773 4959 scope.go:117] "RemoveContainer" containerID="f642c2efc8d06b6adc3ed81dbe30f3aeb2f2cf8ce3c41085335416c3d4190068" Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.474067 4959 scope.go:117] "RemoveContainer" containerID="4422224cf5f417b5d2257f2e2bb002a11b30e2bc8d21daa03c7421455d557f80" Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.496266 4959 scope.go:117] "RemoveContainer" 
containerID="5b9785d89d6e9a55db27e4ad7e6338f1cec9a56a48e1253ab549abd792c4f303" Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.514415 4959 scope.go:117] "RemoveContainer" containerID="6752eaa93b9b2ea81f03375078c6d56e8911e09dc9579ec6488b009dbf950df8" Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.822343 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" event={"ID":"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c","Type":"ContainerStarted","Data":"0e981c57a0b8f29ef0dcaa90f451747f609e37dbc0b4e6084782ef92f5228288"} Jan 21 13:41:34 crc kubenswrapper[4959]: I0121 13:41:34.846797 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" podStartSLOduration=2.460988783 podStartE2EDuration="2.846778355s" podCreationTimestamp="2026-01-21 13:41:32 +0000 UTC" firstStartedPulling="2026-01-21 13:41:33.72990246 +0000 UTC m=+1954.692933003" lastFinishedPulling="2026-01-21 13:41:34.115692032 +0000 UTC m=+1955.078722575" observedRunningTime="2026-01-21 13:41:34.841201104 +0000 UTC m=+1955.804231657" watchObservedRunningTime="2026-01-21 13:41:34.846778355 +0000 UTC m=+1955.809808898" Jan 21 13:41:47 crc kubenswrapper[4959]: I0121 13:41:47.043585 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-54c0-account-create-update-6sn5l"] Jan 21 13:41:47 crc kubenswrapper[4959]: I0121 13:41:47.050178 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mtgkn"] Jan 21 13:41:47 crc kubenswrapper[4959]: I0121 13:41:47.057131 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-54c0-account-create-update-6sn5l"] Jan 21 13:41:47 crc kubenswrapper[4959]: I0121 13:41:47.065000 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mtgkn"] Jan 21 13:41:47 crc kubenswrapper[4959]: I0121 13:41:47.296671 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9485d7a3-a28a-47b1-b949-197adac8d89c" path="/var/lib/kubelet/pods/9485d7a3-a28a-47b1-b949-197adac8d89c/volumes" Jan 21 13:41:47 crc kubenswrapper[4959]: I0121 13:41:47.297300 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc282d4-ce0c-40cc-be72-407c31c0effe" path="/var/lib/kubelet/pods/bdc282d4-ce0c-40cc-be72-407c31c0effe/volumes" Jan 21 13:41:48 crc kubenswrapper[4959]: I0121 13:41:48.043301 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7r8zr"] Jan 21 13:41:48 crc kubenswrapper[4959]: I0121 13:41:48.057627 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bmtxq"] Jan 21 13:41:48 crc kubenswrapper[4959]: I0121 13:41:48.065924 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ca0c-account-create-update-58lql"] Jan 21 13:41:48 crc kubenswrapper[4959]: I0121 13:41:48.071963 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7r8zr"] Jan 21 13:41:48 crc kubenswrapper[4959]: I0121 13:41:48.077868 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bmtxq"] Jan 21 13:41:48 crc kubenswrapper[4959]: I0121 13:41:48.088186 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-36a4-account-create-update-czbvk"] Jan 21 13:41:48 crc kubenswrapper[4959]: I0121 13:41:48.095920 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-ca0c-account-create-update-58lql"] Jan 21 13:41:48 crc kubenswrapper[4959]: I0121 13:41:48.104959 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-36a4-account-create-update-czbvk"] Jan 21 13:41:49 crc kubenswrapper[4959]: I0121 13:41:49.295950 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f15817e-88bb-4e43-afd3-024e73dd60f5" path="/var/lib/kubelet/pods/2f15817e-88bb-4e43-afd3-024e73dd60f5/volumes" Jan 21 13:41:49 crc kubenswrapper[4959]: I0121 13:41:49.296847 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae" path="/var/lib/kubelet/pods/458c6ea3-bcb1-4e4b-8a2a-0fc1af49a8ae/volumes" Jan 21 13:41:49 crc kubenswrapper[4959]: I0121 13:41:49.297465 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff28e6a-1072-4f12-8b90-e62346cdcf59" path="/var/lib/kubelet/pods/5ff28e6a-1072-4f12-8b90-e62346cdcf59/volumes" Jan 21 13:41:49 crc kubenswrapper[4959]: I0121 13:41:49.298072 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d874dcfc-b5e9-42a8-bde8-fe6d8e998af0" path="/var/lib/kubelet/pods/d874dcfc-b5e9-42a8-bde8-fe6d8e998af0/volumes" Jan 21 13:42:16 crc kubenswrapper[4959]: I0121 13:42:16.044138 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d6rms"] Jan 21 13:42:16 crc kubenswrapper[4959]: I0121 13:42:16.055081 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-d6rms"] Jan 21 13:42:17 crc kubenswrapper[4959]: I0121 13:42:17.296791 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2a88cc-56f3-435c-9d5c-b64a38fc25b8" path="/var/lib/kubelet/pods/3c2a88cc-56f3-435c-9d5c-b64a38fc25b8/volumes" Jan 21 13:42:21 crc kubenswrapper[4959]: I0121 13:42:21.380037 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:42:21 crc kubenswrapper[4959]: I0121 13:42:21.380614 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:42:28 crc kubenswrapper[4959]: I0121 13:42:28.291592 4959 generic.go:334] "Generic (PLEG): container finished" podID="f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c" containerID="0e981c57a0b8f29ef0dcaa90f451747f609e37dbc0b4e6084782ef92f5228288" exitCode=0 Jan 21 13:42:28 crc kubenswrapper[4959]: I0121 13:42:28.291769 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" event={"ID":"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c","Type":"ContainerDied","Data":"0e981c57a0b8f29ef0dcaa90f451747f609e37dbc0b4e6084782ef92f5228288"} Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.753619 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.843449 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dlqn\" (UniqueName: \"kubernetes.io/projected/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-kube-api-access-7dlqn\") pod \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.843538 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-inventory\") pod \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.843635 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-ssh-key-openstack-edpm-ipam\") pod \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\" (UID: \"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c\") " Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.851815 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-kube-api-access-7dlqn" (OuterVolumeSpecName: "kube-api-access-7dlqn") pod "f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c" (UID: "f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c"). InnerVolumeSpecName "kube-api-access-7dlqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.875242 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c" (UID: "f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.877921 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-inventory" (OuterVolumeSpecName: "inventory") pod "f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c" (UID: "f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.945738 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.945779 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dlqn\" (UniqueName: \"kubernetes.io/projected/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-kube-api-access-7dlqn\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:29 crc kubenswrapper[4959]: I0121 13:42:29.945788 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.308448 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" event={"ID":"f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c","Type":"ContainerDied","Data":"8eda78b028043b82f2431c9eda123e713f12677afee767e74011048bbeb16b77"} Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.308494 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eda78b028043b82f2431c9eda123e713f12677afee767e74011048bbeb16b77" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.308491 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.398254 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t89bk"] Jan 21 13:42:30 crc kubenswrapper[4959]: E0121 13:42:30.398619 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.398642 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.398852 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.399554 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.401818 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.402081 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.402208 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.402326 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.419154 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t89bk"] Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.453988 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxcn8\" (UniqueName: \"kubernetes.io/projected/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-kube-api-access-sxcn8\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.454078 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.454208 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.556546 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxcn8\" (UniqueName: \"kubernetes.io/projected/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-kube-api-access-sxcn8\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.557024 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.557169 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc 
kubenswrapper[4959]: I0121 13:42:30.566793 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.575598 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.581901 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxcn8\" (UniqueName: \"kubernetes.io/projected/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-kube-api-access-sxcn8\") pod \"ssh-known-hosts-edpm-deployment-t89bk\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:30 crc kubenswrapper[4959]: I0121 13:42:30.715288 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:31 crc kubenswrapper[4959]: I0121 13:42:31.198005 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t89bk"] Jan 21 13:42:31 crc kubenswrapper[4959]: I0121 13:42:31.316512 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" event={"ID":"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5","Type":"ContainerStarted","Data":"7a880df6cbbc69d6ffea29b088dd96ea4a47f82864f269d4d44642c1f920526b"} Jan 21 13:42:32 crc kubenswrapper[4959]: I0121 13:42:32.324960 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" event={"ID":"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5","Type":"ContainerStarted","Data":"88697ecdff62c0d131893c2400a5fa89d581e85de411bfd4a0a5fba867bcf42b"} Jan 21 13:42:32 crc kubenswrapper[4959]: I0121 13:42:32.350471 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" podStartSLOduration=1.9483134720000002 podStartE2EDuration="2.350453092s" podCreationTimestamp="2026-01-21 13:42:30 +0000 UTC" firstStartedPulling="2026-01-21 13:42:31.205927123 +0000 UTC m=+2012.168957666" lastFinishedPulling="2026-01-21 13:42:31.608066743 +0000 UTC m=+2012.571097286" observedRunningTime="2026-01-21 13:42:32.343797121 +0000 UTC m=+2013.306827664" watchObservedRunningTime="2026-01-21 13:42:32.350453092 +0000 UTC m=+2013.313483635" Jan 21 13:42:34 crc kubenswrapper[4959]: I0121 13:42:34.647123 4959 scope.go:117] "RemoveContainer" containerID="4d9827f4fd4fada3519d3cd2c23f7aed651e676e6e0f76bd7bce8e29a0832956" Jan 21 13:42:34 crc kubenswrapper[4959]: I0121 13:42:34.698457 4959 scope.go:117] "RemoveContainer" containerID="9ce6d7d0a4d4d15cd6962602c402fb6e7cb0c3a26a30cc76c5fbc7ab54331077" Jan 21 13:42:34 crc kubenswrapper[4959]: I0121 13:42:34.728476 4959 scope.go:117] "RemoveContainer" containerID="d82362bd7af4309c93acccac6858de816a4168c87efe0f3b9699c24ef54e6efd" Jan 21 13:42:34 crc kubenswrapper[4959]: I0121 13:42:34.768034 4959 scope.go:117] "RemoveContainer" 
containerID="5875645ac4137f6ee5e40bf3414e203b8181fa3c05e36df4b548a2cf26fd40eb" Jan 21 13:42:34 crc kubenswrapper[4959]: I0121 13:42:34.853558 4959 scope.go:117] "RemoveContainer" containerID="82dac5a2a74317ccb05e0df4c5513e86e7a281c83e0d8862107e3d318256e80b" Jan 21 13:42:34 crc kubenswrapper[4959]: I0121 13:42:34.883433 4959 scope.go:117] "RemoveContainer" containerID="bd60b0b69531fbf2908d2bf78d673c63403831348f233c7b9ad5c093184929cd" Jan 21 13:42:34 crc kubenswrapper[4959]: I0121 13:42:34.924981 4959 scope.go:117] "RemoveContainer" containerID="4853647e155c4126a5ca3cb8cd41d5182a2d27344cf1a3ee2f3b722b0f5601ae" Jan 21 13:42:36 crc kubenswrapper[4959]: I0121 13:42:36.066958 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-j649g"] Jan 21 13:42:36 crc kubenswrapper[4959]: I0121 13:42:36.081614 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-j649g"] Jan 21 13:42:37 crc kubenswrapper[4959]: I0121 13:42:37.306454 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8e32a2-8454-4269-a372-96d891edcdda" path="/var/lib/kubelet/pods/4a8e32a2-8454-4269-a372-96d891edcdda/volumes" Jan 21 13:42:39 crc kubenswrapper[4959]: I0121 13:42:39.416526 4959 generic.go:334] "Generic (PLEG): container finished" podID="8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5" containerID="88697ecdff62c0d131893c2400a5fa89d581e85de411bfd4a0a5fba867bcf42b" exitCode=0 Jan 21 13:42:39 crc kubenswrapper[4959]: I0121 13:42:39.416803 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" event={"ID":"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5","Type":"ContainerDied","Data":"88697ecdff62c0d131893c2400a5fa89d581e85de411bfd4a0a5fba867bcf42b"} Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.798138 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.878732 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxcn8\" (UniqueName: \"kubernetes.io/projected/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-kube-api-access-sxcn8\") pod \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.878794 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-inventory-0\") pod \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.878866 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-ssh-key-openstack-edpm-ipam\") pod \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\" (UID: \"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5\") " Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.885389 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-kube-api-access-sxcn8" (OuterVolumeSpecName: "kube-api-access-sxcn8") pod "8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5" (UID: "8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5"). InnerVolumeSpecName "kube-api-access-sxcn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.904471 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5" (UID: "8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.904500 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5" (UID: "8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.981410 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxcn8\" (UniqueName: \"kubernetes.io/projected/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-kube-api-access-sxcn8\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.981484 4959 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:40 crc kubenswrapper[4959]: I0121 13:42:40.981501 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.432000 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" event={"ID":"8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5","Type":"ContainerDied","Data":"7a880df6cbbc69d6ffea29b088dd96ea4a47f82864f269d4d44642c1f920526b"} Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.432447 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a880df6cbbc69d6ffea29b088dd96ea4a47f82864f269d4d44642c1f920526b" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.432133 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t89bk" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.516856 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2"] Jan 21 13:42:41 crc kubenswrapper[4959]: E0121 13:42:41.517247 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5" containerName="ssh-known-hosts-edpm-deployment" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.517266 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5" containerName="ssh-known-hosts-edpm-deployment" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.517443 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5" containerName="ssh-known-hosts-edpm-deployment" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.518021 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.519950 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.522458 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.522507 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.522789 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.527121 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2"] Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.606437 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.606747 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmfpc\" (UniqueName: \"kubernetes.io/projected/b5caad2c-f675-4a5f-8c5d-c711444ce2de-kube-api-access-bmfpc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.606839 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.708491 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmfpc\" (UniqueName: \"kubernetes.io/projected/b5caad2c-f675-4a5f-8c5d-c711444ce2de-kube-api-access-bmfpc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.708552 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.708605 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.713671 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.713770 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.728533 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmfpc\" (UniqueName: \"kubernetes.io/projected/b5caad2c-f675-4a5f-8c5d-c711444ce2de-kube-api-access-bmfpc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5pbh2\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:41 crc kubenswrapper[4959]: I0121 13:42:41.835943 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:42 crc kubenswrapper[4959]: I0121 13:42:42.040227 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s4jcm"] Jan 21 13:42:42 crc kubenswrapper[4959]: I0121 13:42:42.050388 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s4jcm"] Jan 21 13:42:42 crc kubenswrapper[4959]: I0121 13:42:42.418252 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2"] Jan 21 13:42:42 crc kubenswrapper[4959]: I0121 13:42:42.441899 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" event={"ID":"b5caad2c-f675-4a5f-8c5d-c711444ce2de","Type":"ContainerStarted","Data":"8d6a845b41d4b81f3fe5e9811e9d46f63bd40c7a7ec1a7f70d2c06ca06008ab3"} Jan 21 13:42:43 crc kubenswrapper[4959]: I0121 13:42:43.296746 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5c2306-fe9b-475e-8b0e-9ecf06f69050" path="/var/lib/kubelet/pods/aa5c2306-fe9b-475e-8b0e-9ecf06f69050/volumes" Jan 21 13:42:43 crc kubenswrapper[4959]: I0121 13:42:43.452294 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" event={"ID":"b5caad2c-f675-4a5f-8c5d-c711444ce2de","Type":"ContainerStarted","Data":"79be8b58e01794df661a2d9ee329565b3dc98f206dd3e73a7bc302622d12de52"} Jan 21 13:42:43 crc kubenswrapper[4959]: I0121 13:42:43.525532 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" podStartSLOduration=2.080947859 podStartE2EDuration="2.525509649s" podCreationTimestamp="2026-01-21 13:42:41 +0000 UTC" firstStartedPulling="2026-01-21 13:42:42.422632218 +0000 UTC m=+2023.385662761" lastFinishedPulling="2026-01-21 13:42:42.867194018 
+0000 UTC m=+2023.830224551" observedRunningTime="2026-01-21 13:42:43.518858607 +0000 UTC m=+2024.481889150" watchObservedRunningTime="2026-01-21 13:42:43.525509649 +0000 UTC m=+2024.488540192" Jan 21 13:42:51 crc kubenswrapper[4959]: I0121 13:42:51.380214 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:42:51 crc kubenswrapper[4959]: I0121 13:42:51.380788 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:42:51 crc kubenswrapper[4959]: I0121 13:42:51.549178 4959 generic.go:334] "Generic (PLEG): container finished" podID="b5caad2c-f675-4a5f-8c5d-c711444ce2de" containerID="79be8b58e01794df661a2d9ee329565b3dc98f206dd3e73a7bc302622d12de52" exitCode=0 Jan 21 13:42:51 crc kubenswrapper[4959]: I0121 13:42:51.549237 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" event={"ID":"b5caad2c-f675-4a5f-8c5d-c711444ce2de","Type":"ContainerDied","Data":"79be8b58e01794df661a2d9ee329565b3dc98f206dd3e73a7bc302622d12de52"} Jan 21 13:42:52 crc kubenswrapper[4959]: I0121 13:42:52.953890 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.081473 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-ssh-key-openstack-edpm-ipam\") pod \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.081871 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmfpc\" (UniqueName: \"kubernetes.io/projected/b5caad2c-f675-4a5f-8c5d-c711444ce2de-kube-api-access-bmfpc\") pod \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.081980 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-inventory\") pod \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\" (UID: \"b5caad2c-f675-4a5f-8c5d-c711444ce2de\") " Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.091356 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5caad2c-f675-4a5f-8c5d-c711444ce2de-kube-api-access-bmfpc" (OuterVolumeSpecName: "kube-api-access-bmfpc") pod "b5caad2c-f675-4a5f-8c5d-c711444ce2de" (UID: "b5caad2c-f675-4a5f-8c5d-c711444ce2de"). InnerVolumeSpecName "kube-api-access-bmfpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.106969 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-inventory" (OuterVolumeSpecName: "inventory") pod "b5caad2c-f675-4a5f-8c5d-c711444ce2de" (UID: "b5caad2c-f675-4a5f-8c5d-c711444ce2de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.108481 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5caad2c-f675-4a5f-8c5d-c711444ce2de" (UID: "b5caad2c-f675-4a5f-8c5d-c711444ce2de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.185545 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.185594 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmfpc\" (UniqueName: \"kubernetes.io/projected/b5caad2c-f675-4a5f-8c5d-c711444ce2de-kube-api-access-bmfpc\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.185606 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5caad2c-f675-4a5f-8c5d-c711444ce2de-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.568050 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" event={"ID":"b5caad2c-f675-4a5f-8c5d-c711444ce2de","Type":"ContainerDied","Data":"8d6a845b41d4b81f3fe5e9811e9d46f63bd40c7a7ec1a7f70d2c06ca06008ab3"} Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.568114 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6a845b41d4b81f3fe5e9811e9d46f63bd40c7a7ec1a7f70d2c06ca06008ab3" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.568154 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.633868 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v"] Jan 21 13:42:53 crc kubenswrapper[4959]: E0121 13:42:53.634337 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5caad2c-f675-4a5f-8c5d-c711444ce2de" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.634364 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5caad2c-f675-4a5f-8c5d-c711444ce2de" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.634589 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5caad2c-f675-4a5f-8c5d-c711444ce2de" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.635342 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.638144 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.638384 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.639033 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.639867 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.650937 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v"] Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.722743 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.722858 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9h8\" (UniqueName: \"kubernetes.io/projected/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-kube-api-access-mb9h8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.722943 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.824485 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.824598 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb9h8\" (UniqueName: \"kubernetes.io/projected/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-kube-api-access-mb9h8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.824690 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.828213 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.828234 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.844256 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb9h8\" (UniqueName: \"kubernetes.io/projected/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-kube-api-access-mb9h8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:53 crc kubenswrapper[4959]: I0121 13:42:53.961223 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:42:54 crc kubenswrapper[4959]: I0121 13:42:54.504082 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v"] Jan 21 13:42:54 crc kubenswrapper[4959]: I0121 13:42:54.577995 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" event={"ID":"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf","Type":"ContainerStarted","Data":"05e7992ffefc5605296e80cf34788b5112e7cee4a4a9cbda3dc6332e0e17029b"} Jan 21 13:42:55 crc kubenswrapper[4959]: I0121 13:42:55.587159 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" event={"ID":"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf","Type":"ContainerStarted","Data":"92870f3c5a86654d02e09fa9f80358743dfa356a764971ac281967169d6d5de4"} Jan 21 13:42:55 crc kubenswrapper[4959]: I0121 13:42:55.605416 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" podStartSLOduration=2.184546209 podStartE2EDuration="2.605396281s" podCreationTimestamp="2026-01-21 13:42:53 +0000 UTC" firstStartedPulling="2026-01-21 13:42:54.506481969 +0000 UTC m=+2035.469512512" lastFinishedPulling="2026-01-21 13:42:54.927332041 +0000 UTC m=+2035.890362584" observedRunningTime="2026-01-21 13:42:55.599657414 +0000 UTC m=+2036.562687977" watchObservedRunningTime="2026-01-21 13:42:55.605396281 +0000 UTC m=+2036.568426824" Jan 21 13:43:05 crc kubenswrapper[4959]: I0121 13:43:05.689436 4959 generic.go:334] "Generic (PLEG): container finished" podID="91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf" containerID="92870f3c5a86654d02e09fa9f80358743dfa356a764971ac281967169d6d5de4" exitCode=0 Jan 21 13:43:05 crc kubenswrapper[4959]: I0121 13:43:05.689529 4959 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" event={"ID":"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf","Type":"ContainerDied","Data":"92870f3c5a86654d02e09fa9f80358743dfa356a764971ac281967169d6d5de4"} Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.149454 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.194260 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-inventory\") pod \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.194610 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-ssh-key-openstack-edpm-ipam\") pod \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.194660 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb9h8\" (UniqueName: \"kubernetes.io/projected/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-kube-api-access-mb9h8\") pod \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\" (UID: \"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf\") " Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.205421 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-kube-api-access-mb9h8" (OuterVolumeSpecName: "kube-api-access-mb9h8") pod "91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf" (UID: "91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf"). InnerVolumeSpecName "kube-api-access-mb9h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.230513 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-inventory" (OuterVolumeSpecName: "inventory") pod "91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf" (UID: "91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.233181 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf" (UID: "91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.295772 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb9h8\" (UniqueName: \"kubernetes.io/projected/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-kube-api-access-mb9h8\") on node \"crc\" DevicePath \"\"" Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.295810 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.295823 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.706465 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" event={"ID":"91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf","Type":"ContainerDied","Data":"05e7992ffefc5605296e80cf34788b5112e7cee4a4a9cbda3dc6332e0e17029b"} Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.706514 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e7992ffefc5605296e80cf34788b5112e7cee4a4a9cbda3dc6332e0e17029b" Jan 21 13:43:07 crc kubenswrapper[4959]: I0121 13:43:07.706525 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.698722 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lbnr7"] Jan 21 13:43:17 crc kubenswrapper[4959]: E0121 13:43:17.699700 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.699719 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.699898 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.701067 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.741973 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbnr7"] Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.786633 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-catalog-content\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.786691 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m9nj\" (UniqueName: \"kubernetes.io/projected/0081d09b-14f0-4f16-b5cc-2c621f886d31-kube-api-access-5m9nj\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.786814 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-utilities\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.888573 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-utilities\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.888711 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-catalog-content\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.888749 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m9nj\" (UniqueName: \"kubernetes.io/projected/0081d09b-14f0-4f16-b5cc-2c621f886d31-kube-api-access-5m9nj\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.889303 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-catalog-content\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.889418 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-utilities\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:17 crc kubenswrapper[4959]: I0121 13:43:17.910713 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5m9nj\" (UniqueName: \"kubernetes.io/projected/0081d09b-14f0-4f16-b5cc-2c621f886d31-kube-api-access-5m9nj\") pod \"redhat-operators-lbnr7\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:18 crc kubenswrapper[4959]: I0121 13:43:18.028766 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:18 crc kubenswrapper[4959]: I0121 13:43:18.506047 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbnr7"] Jan 21 13:43:18 crc kubenswrapper[4959]: I0121 13:43:18.803572 4959 generic.go:334] "Generic (PLEG): container finished" podID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerID="da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e" exitCode=0 Jan 21 13:43:18 crc kubenswrapper[4959]: I0121 13:43:18.803623 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbnr7" event={"ID":"0081d09b-14f0-4f16-b5cc-2c621f886d31","Type":"ContainerDied","Data":"da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e"} Jan 21 13:43:18 crc kubenswrapper[4959]: I0121 13:43:18.803655 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbnr7" event={"ID":"0081d09b-14f0-4f16-b5cc-2c621f886d31","Type":"ContainerStarted","Data":"c0b0782d5845730dbf7f2c411f2ac781dbeac0e8aa1c7a3515df9e221b669313"} Jan 21 13:43:19 crc kubenswrapper[4959]: I0121 13:43:19.814874 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbnr7" event={"ID":"0081d09b-14f0-4f16-b5cc-2c621f886d31","Type":"ContainerStarted","Data":"9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803"} Jan 21 13:43:20 crc kubenswrapper[4959]: I0121 13:43:20.825649 4959 generic.go:334] "Generic (PLEG): container finished" podID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerID="9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803" exitCode=0 Jan 21 13:43:20 crc kubenswrapper[4959]: I0121 13:43:20.825695 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbnr7" event={"ID":"0081d09b-14f0-4f16-b5cc-2c621f886d31","Type":"ContainerDied","Data":"9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803"} Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.379304 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.379590 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.379629 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.380284 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1ee363e4e1583c15674ba6308ac663c5b4e8b3fda56922564a71d99462d29340"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.380339 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://1ee363e4e1583c15674ba6308ac663c5b4e8b3fda56922564a71d99462d29340" gracePeriod=600 Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.835072 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="1ee363e4e1583c15674ba6308ac663c5b4e8b3fda56922564a71d99462d29340" exitCode=0 Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.835212 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"1ee363e4e1583c15674ba6308ac663c5b4e8b3fda56922564a71d99462d29340"} Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.835500 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"} Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.835526 4959 scope.go:117] "RemoveContainer" containerID="d78baf625f3252214ec6b89796abc9286f5f647239a28c8736fc5a429bad3e0a" Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.838421 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbnr7" event={"ID":"0081d09b-14f0-4f16-b5cc-2c621f886d31","Type":"ContainerStarted","Data":"1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094"} Jan 21 13:43:21 crc kubenswrapper[4959]: I0121 13:43:21.881872 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lbnr7" podStartSLOduration=2.478880465 podStartE2EDuration="4.881849505s" podCreationTimestamp="2026-01-21 13:43:17 +0000 UTC" firstStartedPulling="2026-01-21 13:43:18.805733728 +0000 UTC m=+2059.768764281" lastFinishedPulling="2026-01-21 13:43:21.208702778 +0000 UTC m=+2062.171733321" observedRunningTime="2026-01-21 13:43:21.879896492 +0000 UTC m=+2062.842927035" watchObservedRunningTime="2026-01-21 13:43:21.881849505 +0000 UTC m=+2062.844880048" Jan 21 13:43:25 crc kubenswrapper[4959]: I0121 13:43:25.059534 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mnsbv"] Jan 21 13:43:25 crc kubenswrapper[4959]: I0121 13:43:25.068640 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mnsbv"] Jan 21 13:43:25 crc kubenswrapper[4959]: I0121 13:43:25.297323 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a209d5-9290-4c05-bff7-afeb8173fac5" path="/var/lib/kubelet/pods/46a209d5-9290-4c05-bff7-afeb8173fac5/volumes" Jan 21 13:43:28 crc kubenswrapper[4959]: I0121 13:43:28.028920 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:28 crc kubenswrapper[4959]: I0121 13:43:28.029587 
4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:28 crc kubenswrapper[4959]: I0121 13:43:28.070889 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:28 crc kubenswrapper[4959]: I0121 13:43:28.941544 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:28 crc kubenswrapper[4959]: I0121 13:43:28.987081 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbnr7"] Jan 21 13:43:30 crc kubenswrapper[4959]: I0121 13:43:30.912695 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lbnr7" podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerName="registry-server" containerID="cri-o://1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094" gracePeriod=2 Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.400323 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.554263 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-catalog-content\") pod \"0081d09b-14f0-4f16-b5cc-2c621f886d31\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.554345 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-utilities\") pod \"0081d09b-14f0-4f16-b5cc-2c621f886d31\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.554430 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m9nj\" (UniqueName: \"kubernetes.io/projected/0081d09b-14f0-4f16-b5cc-2c621f886d31-kube-api-access-5m9nj\") pod \"0081d09b-14f0-4f16-b5cc-2c621f886d31\" (UID: \"0081d09b-14f0-4f16-b5cc-2c621f886d31\") " Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.555434 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-utilities" (OuterVolumeSpecName: "utilities") pod "0081d09b-14f0-4f16-b5cc-2c621f886d31" (UID: "0081d09b-14f0-4f16-b5cc-2c621f886d31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.561802 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0081d09b-14f0-4f16-b5cc-2c621f886d31-kube-api-access-5m9nj" (OuterVolumeSpecName: "kube-api-access-5m9nj") pod "0081d09b-14f0-4f16-b5cc-2c621f886d31" (UID: "0081d09b-14f0-4f16-b5cc-2c621f886d31"). InnerVolumeSpecName "kube-api-access-5m9nj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.656087 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.656180 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m9nj\" (UniqueName: \"kubernetes.io/projected/0081d09b-14f0-4f16-b5cc-2c621f886d31-kube-api-access-5m9nj\") on node \"crc\" DevicePath \"\"" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.673578 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0081d09b-14f0-4f16-b5cc-2c621f886d31" (UID: "0081d09b-14f0-4f16-b5cc-2c621f886d31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.757840 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0081d09b-14f0-4f16-b5cc-2c621f886d31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.923769 4959 generic.go:334] "Generic (PLEG): container finished" podID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerID="1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094" exitCode=0 Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.923821 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbnr7" event={"ID":"0081d09b-14f0-4f16-b5cc-2c621f886d31","Type":"ContainerDied","Data":"1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094"} Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.923836 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbnr7" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.923852 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbnr7" event={"ID":"0081d09b-14f0-4f16-b5cc-2c621f886d31","Type":"ContainerDied","Data":"c0b0782d5845730dbf7f2c411f2ac781dbeac0e8aa1c7a3515df9e221b669313"} Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.923872 4959 scope.go:117] "RemoveContainer" containerID="1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.957336 4959 scope.go:117] "RemoveContainer" containerID="9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803" Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.966328 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbnr7"] Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.978072 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lbnr7"] Jan 21 13:43:31 crc kubenswrapper[4959]: I0121 13:43:31.986399 4959 scope.go:117] "RemoveContainer" containerID="da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e" Jan 21 13:43:32 crc kubenswrapper[4959]: I0121 13:43:32.026280 4959 scope.go:117] "RemoveContainer" containerID="1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094" Jan 21 13:43:32 crc kubenswrapper[4959]: E0121 13:43:32.026783 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094\": container with ID starting with 1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094 not found: ID does not exist" containerID="1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094" Jan 21 13:43:32 crc kubenswrapper[4959]: I0121 13:43:32.026849 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094"} err="failed to get container status \"1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094\": rpc error: code = NotFound desc = could not find container \"1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094\": container with ID starting with 1bbe7909fb68c4d594957296793a208f97785d2d7fd0bdb473d221f0d63b2094 not found: ID does not exist" Jan 21 13:43:32 crc kubenswrapper[4959]: I0121 13:43:32.026876 4959 scope.go:117] "RemoveContainer" containerID="9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803" Jan 21 13:43:32 crc kubenswrapper[4959]: E0121 13:43:32.027316 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803\": container with ID starting with 9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803 not found: ID does not exist" containerID="9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803" Jan 21 13:43:32 crc kubenswrapper[4959]: I0121 13:43:32.027344 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803"} err="failed to get container status \"9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803\": rpc error: code = NotFound desc = could not find container 
\"9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803\": container with ID starting with 9891eb2f4dfed44cf75cb648f7588437d2bd59f6ad2d95db041ab547a1d06803 not found: ID does not exist" Jan 21 13:43:32 crc kubenswrapper[4959]: I0121 13:43:32.027358 4959 scope.go:117] "RemoveContainer" containerID="da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e" Jan 21 13:43:32 crc kubenswrapper[4959]: E0121 13:43:32.027596 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e\": container with ID starting with da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e not found: ID does not exist" containerID="da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e" Jan 21 13:43:32 crc kubenswrapper[4959]: I0121 13:43:32.027624 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e"} err="failed to get container status \"da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e\": rpc error: code = NotFound desc = could not find container \"da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e\": container with ID starting with da0bc9921432f921fd8dbbacfe1e8817cf843a745653855eca66032093bf908e not found: ID does not exist" Jan 21 13:43:33 crc kubenswrapper[4959]: I0121 13:43:33.295659 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" path="/var/lib/kubelet/pods/0081d09b-14f0-4f16-b5cc-2c621f886d31/volumes" Jan 21 13:43:35 crc kubenswrapper[4959]: I0121 13:43:35.030310 4959 scope.go:117] "RemoveContainer" containerID="39648aec8956149a5e232f9d3e2ce69a664ac1bf4dab5870f1f52d2a5646a833" Jan 21 13:43:35 crc kubenswrapper[4959]: I0121 13:43:35.073269 4959 scope.go:117] "RemoveContainer" containerID="b41fe1ef7128f9dba919b6f450fb67cd62961492953991bcf72304f4a3d278c5" Jan 21 13:43:35 crc kubenswrapper[4959]: I0121 13:43:35.109263 4959 scope.go:117] "RemoveContainer" containerID="5c953a7530362dfff0a24dd3641fc5ce8dc9b691362b68f13d741d7066acfccd" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.150479 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g"] Jan 21 13:45:00 crc kubenswrapper[4959]: E0121 13:45:00.151680 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.151697 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4959]: E0121 13:45:00.151721 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.151729 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4959]: E0121 13:45:00.151758 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.151767 4959 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.151981 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0081d09b-14f0-4f16-b5cc-2c621f886d31" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.152895 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.156298 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.156309 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.162319 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g"] Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.325725 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p8vv\" (UniqueName: \"kubernetes.io/projected/0219fad6-0737-4f2b-985d-63ada4eaf374-kube-api-access-8p8vv\") pod \"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.325870 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0219fad6-0737-4f2b-985d-63ada4eaf374-config-volume\") pod \"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.325917 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0219fad6-0737-4f2b-985d-63ada4eaf374-secret-volume\") pod \"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.427040 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p8vv\" (UniqueName: \"kubernetes.io/projected/0219fad6-0737-4f2b-985d-63ada4eaf374-kube-api-access-8p8vv\") pod \"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.427924 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0219fad6-0737-4f2b-985d-63ada4eaf374-config-volume\") pod \"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.427958 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0219fad6-0737-4f2b-985d-63ada4eaf374-secret-volume\") pod 
\"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.429651 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0219fad6-0737-4f2b-985d-63ada4eaf374-config-volume\") pod \"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.434190 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0219fad6-0737-4f2b-985d-63ada4eaf374-secret-volume\") pod \"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.443831 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p8vv\" (UniqueName: \"kubernetes.io/projected/0219fad6-0737-4f2b-985d-63ada4eaf374-kube-api-access-8p8vv\") pod \"collect-profiles-29483385-24z8g\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.476974 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:00 crc kubenswrapper[4959]: I0121 13:45:00.945507 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g"] Jan 21 13:45:01 crc kubenswrapper[4959]: I0121 13:45:01.644701 4959 generic.go:334] "Generic (PLEG): container finished" podID="0219fad6-0737-4f2b-985d-63ada4eaf374" containerID="ac81e689d29d93be3221f088fab6fdf5ae1a6f690b16e6ecab1ee53abee0fc82" exitCode=0 Jan 21 13:45:01 crc kubenswrapper[4959]: I0121 13:45:01.644826 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" event={"ID":"0219fad6-0737-4f2b-985d-63ada4eaf374","Type":"ContainerDied","Data":"ac81e689d29d93be3221f088fab6fdf5ae1a6f690b16e6ecab1ee53abee0fc82"} Jan 21 13:45:01 crc kubenswrapper[4959]: I0121 13:45:01.645273 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" event={"ID":"0219fad6-0737-4f2b-985d-63ada4eaf374","Type":"ContainerStarted","Data":"620fa6c6c9b7666fe8be9b1b9579355bbdff9f17b04665daf73b46628f3b84f7"} Jan 21 13:45:02 crc kubenswrapper[4959]: I0121 13:45:02.990780 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.181500 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0219fad6-0737-4f2b-985d-63ada4eaf374-secret-volume\") pod \"0219fad6-0737-4f2b-985d-63ada4eaf374\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.181635 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p8vv\" (UniqueName: \"kubernetes.io/projected/0219fad6-0737-4f2b-985d-63ada4eaf374-kube-api-access-8p8vv\") pod \"0219fad6-0737-4f2b-985d-63ada4eaf374\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.181831 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0219fad6-0737-4f2b-985d-63ada4eaf374-config-volume\") pod \"0219fad6-0737-4f2b-985d-63ada4eaf374\" (UID: \"0219fad6-0737-4f2b-985d-63ada4eaf374\") " Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.183052 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0219fad6-0737-4f2b-985d-63ada4eaf374-config-volume" (OuterVolumeSpecName: "config-volume") pod "0219fad6-0737-4f2b-985d-63ada4eaf374" (UID: "0219fad6-0737-4f2b-985d-63ada4eaf374"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.189751 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0219fad6-0737-4f2b-985d-63ada4eaf374-kube-api-access-8p8vv" (OuterVolumeSpecName: "kube-api-access-8p8vv") pod "0219fad6-0737-4f2b-985d-63ada4eaf374" (UID: "0219fad6-0737-4f2b-985d-63ada4eaf374"). InnerVolumeSpecName "kube-api-access-8p8vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.190344 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0219fad6-0737-4f2b-985d-63ada4eaf374-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0219fad6-0737-4f2b-985d-63ada4eaf374" (UID: "0219fad6-0737-4f2b-985d-63ada4eaf374"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.284024 4959 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0219fad6-0737-4f2b-985d-63ada4eaf374-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.284286 4959 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0219fad6-0737-4f2b-985d-63ada4eaf374-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.284345 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p8vv\" (UniqueName: \"kubernetes.io/projected/0219fad6-0737-4f2b-985d-63ada4eaf374-kube-api-access-8p8vv\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.662154 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" event={"ID":"0219fad6-0737-4f2b-985d-63ada4eaf374","Type":"ContainerDied","Data":"620fa6c6c9b7666fe8be9b1b9579355bbdff9f17b04665daf73b46628f3b84f7"} Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.662199 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="620fa6c6c9b7666fe8be9b1b9579355bbdff9f17b04665daf73b46628f3b84f7" Jan 21 13:45:03 crc kubenswrapper[4959]: I0121 13:45:03.662231 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g" Jan 21 13:45:04 crc kubenswrapper[4959]: I0121 13:45:04.061920 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"] Jan 21 13:45:04 crc kubenswrapper[4959]: I0121 13:45:04.070738 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-rvgvt"] Jan 21 13:45:05 crc kubenswrapper[4959]: I0121 13:45:05.297333 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab04d280-8b58-44e7-a789-f706b8c5f807" path="/var/lib/kubelet/pods/ab04d280-8b58-44e7-a789-f706b8c5f807/volumes" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.693529 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5hd7"] Jan 21 13:45:15 crc kubenswrapper[4959]: E0121 13:45:15.695515 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0219fad6-0737-4f2b-985d-63ada4eaf374" containerName="collect-profiles" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.695540 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0219fad6-0737-4f2b-985d-63ada4eaf374" containerName="collect-profiles" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.695730 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0219fad6-0737-4f2b-985d-63ada4eaf374" containerName="collect-profiles" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.697018 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.702772 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5hd7"] Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.809434 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-utilities\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.809919 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-catalog-content\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.809984 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxj29\" (UniqueName: \"kubernetes.io/projected/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-kube-api-access-hxj29\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.877569 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gx2gn"] Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.879658 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.896355 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gx2gn"] Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.911849 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-utilities\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.911958 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-catalog-content\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.912006 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxj29\" (UniqueName: \"kubernetes.io/projected/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-kube-api-access-hxj29\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.912554 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-utilities\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " 
pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.912621 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-catalog-content\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:15 crc kubenswrapper[4959]: I0121 13:45:15.941046 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxj29\" (UniqueName: \"kubernetes.io/projected/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-kube-api-access-hxj29\") pod \"redhat-marketplace-l5hd7\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.014069 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-catalog-content\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.014151 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5zkn\" (UniqueName: \"kubernetes.io/projected/659cc246-1d75-4042-901f-970acf6cb777-kube-api-access-w5zkn\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.014230 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-utilities\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.026490 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.115689 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-catalog-content\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.115742 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5zkn\" (UniqueName: \"kubernetes.io/projected/659cc246-1d75-4042-901f-970acf6cb777-kube-api-access-w5zkn\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.115801 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-utilities\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.116719 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-utilities\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.116730 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-catalog-content\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.147016 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5zkn\" (UniqueName: \"kubernetes.io/projected/659cc246-1d75-4042-901f-970acf6cb777-kube-api-access-w5zkn\") pod \"community-operators-gx2gn\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.198742 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.542588 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5hd7"] Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.774422 4959 generic.go:334] "Generic (PLEG): container finished" podID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerID="6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10" exitCode=0 Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.774473 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5hd7" event={"ID":"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b","Type":"ContainerDied","Data":"6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10"} Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.774517 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5hd7" event={"ID":"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b","Type":"ContainerStarted","Data":"af31eb8302832d66b74c923c151070572aca39125285552accd71259beaeff0e"} Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.776521 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:45:16 crc kubenswrapper[4959]: W0121 13:45:16.817890 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod659cc246_1d75_4042_901f_970acf6cb777.slice/crio-69b40806cdefc5c9d381fa33d2a52ecdb1b817b1971c8098ab3ba9ae3247023c WatchSource:0}: Error finding container 69b40806cdefc5c9d381fa33d2a52ecdb1b817b1971c8098ab3ba9ae3247023c: Status 404 returned error can't find the container with id 69b40806cdefc5c9d381fa33d2a52ecdb1b817b1971c8098ab3ba9ae3247023c Jan 21 13:45:16 crc kubenswrapper[4959]: I0121 13:45:16.821481 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gx2gn"] Jan 21 13:45:17 crc kubenswrapper[4959]: I0121 13:45:17.786115 4959 generic.go:334] "Generic (PLEG): container finished" podID="659cc246-1d75-4042-901f-970acf6cb777" containerID="d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d" exitCode=0 Jan 21 13:45:17 crc kubenswrapper[4959]: I0121 13:45:17.786263 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx2gn" event={"ID":"659cc246-1d75-4042-901f-970acf6cb777","Type":"ContainerDied","Data":"d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d"} Jan 21 13:45:17 crc kubenswrapper[4959]: I0121 13:45:17.786442 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx2gn" event={"ID":"659cc246-1d75-4042-901f-970acf6cb777","Type":"ContainerStarted","Data":"69b40806cdefc5c9d381fa33d2a52ecdb1b817b1971c8098ab3ba9ae3247023c"} Jan 21 13:45:17 crc kubenswrapper[4959]: I0121 13:45:17.790973 4959 generic.go:334] "Generic (PLEG): container finished" podID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerID="30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509" exitCode=0 Jan 21 13:45:17 crc kubenswrapper[4959]: I0121 13:45:17.791021 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5hd7" event={"ID":"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b","Type":"ContainerDied","Data":"30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509"} Jan 21 13:45:18 crc kubenswrapper[4959]: I0121 
13:45:18.800757 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5hd7" event={"ID":"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b","Type":"ContainerStarted","Data":"45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c"} Jan 21 13:45:18 crc kubenswrapper[4959]: I0121 13:45:18.805969 4959 generic.go:334] "Generic (PLEG): container finished" podID="659cc246-1d75-4042-901f-970acf6cb777" containerID="15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da" exitCode=0 Jan 21 13:45:18 crc kubenswrapper[4959]: I0121 13:45:18.806007 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx2gn" event={"ID":"659cc246-1d75-4042-901f-970acf6cb777","Type":"ContainerDied","Data":"15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da"} Jan 21 13:45:18 crc kubenswrapper[4959]: I0121 13:45:18.829136 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5hd7" podStartSLOduration=2.42730196 podStartE2EDuration="3.829114472s" podCreationTimestamp="2026-01-21 13:45:15 +0000 UTC" firstStartedPulling="2026-01-21 13:45:16.776270973 +0000 UTC m=+2177.739301516" lastFinishedPulling="2026-01-21 13:45:18.178083485 +0000 UTC m=+2179.141114028" observedRunningTime="2026-01-21 13:45:18.821360924 +0000 UTC m=+2179.784391477" watchObservedRunningTime="2026-01-21 13:45:18.829114472 +0000 UTC m=+2179.792145015" Jan 21 13:45:19 crc kubenswrapper[4959]: I0121 13:45:19.818113 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx2gn" event={"ID":"659cc246-1d75-4042-901f-970acf6cb777","Type":"ContainerStarted","Data":"2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b"} Jan 21 13:45:19 crc kubenswrapper[4959]: I0121 13:45:19.838197 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gx2gn" podStartSLOduration=3.465300522 podStartE2EDuration="4.838181726s" podCreationTimestamp="2026-01-21 13:45:15 +0000 UTC" firstStartedPulling="2026-01-21 13:45:17.789979792 +0000 UTC m=+2178.753010335" lastFinishedPulling="2026-01-21 13:45:19.162860996 +0000 UTC m=+2180.125891539" observedRunningTime="2026-01-21 13:45:19.835669949 +0000 UTC m=+2180.798700492" watchObservedRunningTime="2026-01-21 13:45:19.838181726 +0000 UTC m=+2180.801212269" Jan 21 13:45:21 crc kubenswrapper[4959]: I0121 13:45:21.379926 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:45:21 crc kubenswrapper[4959]: I0121 13:45:21.380289 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:45:26 crc kubenswrapper[4959]: I0121 13:45:26.027181 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:26 crc kubenswrapper[4959]: I0121 13:45:26.027655 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:26 crc kubenswrapper[4959]: I0121 13:45:26.073826 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:26 crc kubenswrapper[4959]: I0121 13:45:26.199163 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:26 crc kubenswrapper[4959]: I0121 13:45:26.199201 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:26 crc kubenswrapper[4959]: I0121 13:45:26.240666 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:26 crc kubenswrapper[4959]: I0121 13:45:26.934336 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:26 crc kubenswrapper[4959]: I0121 13:45:26.935956 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:27 crc kubenswrapper[4959]: I0121 13:45:27.507388 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5hd7"] Jan 21 13:45:28 crc kubenswrapper[4959]: I0121 13:45:28.903468 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5hd7" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerName="registry-server" containerID="cri-o://45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c" gracePeriod=2 Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.309846 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gx2gn"] Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.310188 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gx2gn" podUID="659cc246-1d75-4042-901f-970acf6cb777" containerName="registry-server" containerID="cri-o://2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b" gracePeriod=2 Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.854819 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.860513 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.913042 4959 generic.go:334] "Generic (PLEG): container finished" podID="659cc246-1d75-4042-901f-970acf6cb777" containerID="2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b" exitCode=0 Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.913141 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx2gn" event={"ID":"659cc246-1d75-4042-901f-970acf6cb777","Type":"ContainerDied","Data":"2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b"} Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.913148 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gx2gn" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.913169 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx2gn" event={"ID":"659cc246-1d75-4042-901f-970acf6cb777","Type":"ContainerDied","Data":"69b40806cdefc5c9d381fa33d2a52ecdb1b817b1971c8098ab3ba9ae3247023c"} Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.913188 4959 scope.go:117] "RemoveContainer" containerID="2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.915983 4959 generic.go:334] "Generic (PLEG): container finished" podID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerID="45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c" exitCode=0 Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.916031 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5hd7" event={"ID":"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b","Type":"ContainerDied","Data":"45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c"} Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.916045 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5hd7" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.916062 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5hd7" event={"ID":"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b","Type":"ContainerDied","Data":"af31eb8302832d66b74c923c151070572aca39125285552accd71259beaeff0e"} Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.936367 4959 scope.go:117] "RemoveContainer" containerID="15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.956975 4959 scope.go:117] "RemoveContainer" containerID="d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.973190 4959 scope.go:117] "RemoveContainer" containerID="2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b" Jan 21 13:45:29 crc kubenswrapper[4959]: E0121 13:45:29.973519 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b\": container with ID starting with 2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b not found: ID does not exist" containerID="2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.973551 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b"} err="failed to get container status \"2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b\": rpc error: code = NotFound desc = could not find container \"2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b\": container with ID starting with 2c1ba9aba58ffd588543fddf1192c6ec2211544b0a1044fa0e3cc098dbb9cf7b not found: ID does not exist" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.973569 4959 scope.go:117] "RemoveContainer" containerID="15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da" Jan 21 13:45:29 crc kubenswrapper[4959]: E0121 13:45:29.973824 4959 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da\": container with ID starting with 15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da not found: ID does not exist" containerID="15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.973850 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da"} err="failed to get container status \"15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da\": rpc error: code = NotFound desc = could not find container \"15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da\": container with ID starting with 15e73abe43d96972150f416459ec8d5fb4ab13955b6ffd095bc59772d02fc4da not found: ID does not exist" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.973863 4959 scope.go:117] "RemoveContainer" containerID="d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d" Jan 21 13:45:29 crc kubenswrapper[4959]: E0121 13:45:29.974250 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d\": container with ID starting with d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d not found: ID does not exist" containerID="d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.974271 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d"} err="failed to get container status \"d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d\": rpc error: code = NotFound desc = could not find container \"d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d\": container with ID starting with d39e1838eba200ec617f1339bf51d641eb88add5c56c8ec26335de484b44f80d not found: ID does not exist" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.974285 4959 scope.go:117] "RemoveContainer" containerID="45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.974942 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-catalog-content\") pod \"659cc246-1d75-4042-901f-970acf6cb777\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.975008 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5zkn\" (UniqueName: \"kubernetes.io/projected/659cc246-1d75-4042-901f-970acf6cb777-kube-api-access-w5zkn\") pod \"659cc246-1d75-4042-901f-970acf6cb777\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.975024 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-utilities\") pod \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.975212 4959 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hxj29\" (UniqueName: \"kubernetes.io/projected/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-kube-api-access-hxj29\") pod \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.975863 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-catalog-content\") pod \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\" (UID: \"a8a7a8cc-99bd-4476-a9a2-4f2139b1023b\") " Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.975968 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-utilities" (OuterVolumeSpecName: "utilities") pod "a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" (UID: "a8a7a8cc-99bd-4476-a9a2-4f2139b1023b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.976004 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-utilities\") pod \"659cc246-1d75-4042-901f-970acf6cb777\" (UID: \"659cc246-1d75-4042-901f-970acf6cb777\") " Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.976839 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-utilities" (OuterVolumeSpecName: "utilities") pod "659cc246-1d75-4042-901f-970acf6cb777" (UID: "659cc246-1d75-4042-901f-970acf6cb777"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.977052 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.977074 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.980163 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-kube-api-access-hxj29" (OuterVolumeSpecName: "kube-api-access-hxj29") pod "a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" (UID: "a8a7a8cc-99bd-4476-a9a2-4f2139b1023b"). InnerVolumeSpecName "kube-api-access-hxj29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.980894 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659cc246-1d75-4042-901f-970acf6cb777-kube-api-access-w5zkn" (OuterVolumeSpecName: "kube-api-access-w5zkn") pod "659cc246-1d75-4042-901f-970acf6cb777" (UID: "659cc246-1d75-4042-901f-970acf6cb777"). InnerVolumeSpecName "kube-api-access-w5zkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.992246 4959 scope.go:117] "RemoveContainer" containerID="30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509" Jan 21 13:45:29 crc kubenswrapper[4959]: I0121 13:45:29.997275 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" (UID: "a8a7a8cc-99bd-4476-a9a2-4f2139b1023b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.008809 4959 scope.go:117] "RemoveContainer" containerID="6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.025961 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "659cc246-1d75-4042-901f-970acf6cb777" (UID: "659cc246-1d75-4042-901f-970acf6cb777"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.061524 4959 scope.go:117] "RemoveContainer" containerID="45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c" Jan 21 13:45:30 crc kubenswrapper[4959]: E0121 13:45:30.061957 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c\": container with ID starting with 45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c not found: ID does not exist" containerID="45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.062014 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c"} err="failed to get container status \"45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c\": rpc error: code = NotFound desc = could not find container \"45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c\": container with ID starting with 45f06e71be06daebb816c697e924e86aa1726b1289faadf9d0c94ee1eda4d04c not found: ID does not exist" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.062048 4959 scope.go:117] "RemoveContainer" containerID="30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509" Jan 21 13:45:30 crc kubenswrapper[4959]: E0121 13:45:30.062429 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509\": container with ID starting with 30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509 not found: ID does not exist" containerID="30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.062454 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509"} err="failed to get container status \"30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509\": rpc error: code = NotFound desc 
= could not find container \"30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509\": container with ID starting with 30c02fdce03f905e1827b6026cb1964a2b707f0fa0ae5df4c8390efb2100c509 not found: ID does not exist" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.062471 4959 scope.go:117] "RemoveContainer" containerID="6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10" Jan 21 13:45:30 crc kubenswrapper[4959]: E0121 13:45:30.062764 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10\": container with ID starting with 6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10 not found: ID does not exist" containerID="6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.062822 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10"} err="failed to get container status \"6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10\": rpc error: code = NotFound desc = could not find container \"6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10\": container with ID starting with 6736ae14d7fbc6778982790e9bc8194d1fdf6d6f20476fe6f11a0d944025aa10 not found: ID does not exist" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.079395 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.079438 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659cc246-1d75-4042-901f-970acf6cb777-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.079453 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5zkn\" (UniqueName: \"kubernetes.io/projected/659cc246-1d75-4042-901f-970acf6cb777-kube-api-access-w5zkn\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.079468 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxj29\" (UniqueName: \"kubernetes.io/projected/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b-kube-api-access-hxj29\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.249578 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gx2gn"] Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.262958 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gx2gn"] Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.270225 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5hd7"] Jan 21 13:45:30 crc kubenswrapper[4959]: I0121 13:45:30.275549 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5hd7"] Jan 21 13:45:31 crc kubenswrapper[4959]: I0121 13:45:31.301512 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659cc246-1d75-4042-901f-970acf6cb777" path="/var/lib/kubelet/pods/659cc246-1d75-4042-901f-970acf6cb777/volumes" Jan 21 13:45:31 crc kubenswrapper[4959]: I0121 
13:45:31.302661 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" path="/var/lib/kubelet/pods/a8a7a8cc-99bd-4476-a9a2-4f2139b1023b/volumes" Jan 21 13:45:35 crc kubenswrapper[4959]: I0121 13:45:35.247655 4959 scope.go:117] "RemoveContainer" containerID="021f2ec969185f1f287f423330c4925953e7ff57f06fe920de426b108ad01138" Jan 21 13:45:51 crc kubenswrapper[4959]: I0121 13:45:51.379778 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:45:51 crc kubenswrapper[4959]: I0121 13:45:51.380302 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.738716 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8fj92"] Jan 21 13:46:12 crc kubenswrapper[4959]: E0121 13:46:12.739772 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659cc246-1d75-4042-901f-970acf6cb777" containerName="extract-content" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.739788 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="659cc246-1d75-4042-901f-970acf6cb777" containerName="extract-content" Jan 21 13:46:12 crc kubenswrapper[4959]: E0121 13:46:12.739807 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerName="extract-utilities" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.739815 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerName="extract-utilities" Jan 21 13:46:12 crc kubenswrapper[4959]: E0121 13:46:12.739835 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659cc246-1d75-4042-901f-970acf6cb777" containerName="extract-utilities" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.739842 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="659cc246-1d75-4042-901f-970acf6cb777" containerName="extract-utilities" Jan 21 13:46:12 crc kubenswrapper[4959]: E0121 13:46:12.739862 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerName="registry-server" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.739869 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerName="registry-server" Jan 21 13:46:12 crc kubenswrapper[4959]: E0121 13:46:12.739891 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerName="extract-content" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.739898 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerName="extract-content" Jan 21 13:46:12 crc kubenswrapper[4959]: E0121 13:46:12.739907 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659cc246-1d75-4042-901f-970acf6cb777" containerName="registry-server" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 
13:46:12.739914 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="659cc246-1d75-4042-901f-970acf6cb777" containerName="registry-server" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.740140 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8a7a8cc-99bd-4476-a9a2-4f2139b1023b" containerName="registry-server" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.740158 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="659cc246-1d75-4042-901f-970acf6cb777" containerName="registry-server" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.741694 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.747666 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fj92"] Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.853400 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-utilities\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.853682 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56qpm\" (UniqueName: \"kubernetes.io/projected/c6d7cfbd-b948-4110-8ca9-f802c882e828-kube-api-access-56qpm\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.853727 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-catalog-content\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.955724 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-catalog-content\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.955894 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-utilities\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.955956 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56qpm\" (UniqueName: \"kubernetes.io/projected/c6d7cfbd-b948-4110-8ca9-f802c882e828-kube-api-access-56qpm\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.956277 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-catalog-content\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.956384 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-utilities\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:12 crc kubenswrapper[4959]: I0121 13:46:12.977224 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56qpm\" (UniqueName: \"kubernetes.io/projected/c6d7cfbd-b948-4110-8ca9-f802c882e828-kube-api-access-56qpm\") pod \"certified-operators-8fj92\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:13 crc kubenswrapper[4959]: I0121 13:46:13.067851 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:13 crc kubenswrapper[4959]: I0121 13:46:13.551017 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fj92"] Jan 21 13:46:13 crc kubenswrapper[4959]: W0121 13:46:13.552560 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6d7cfbd_b948_4110_8ca9_f802c882e828.slice/crio-37798b0b33a76c1acfcbc8d806887b8533f5100be401f4f484197a111da744f9 WatchSource:0}: Error finding container 37798b0b33a76c1acfcbc8d806887b8533f5100be401f4f484197a111da744f9: Status 404 returned error can't find the container with id 37798b0b33a76c1acfcbc8d806887b8533f5100be401f4f484197a111da744f9 Jan 21 13:46:14 crc kubenswrapper[4959]: I0121 13:46:14.283221 4959 generic.go:334] "Generic (PLEG): container finished" podID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerID="066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7" exitCode=0 Jan 21 13:46:14 crc kubenswrapper[4959]: I0121 13:46:14.283319 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fj92" event={"ID":"c6d7cfbd-b948-4110-8ca9-f802c882e828","Type":"ContainerDied","Data":"066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7"} Jan 21 13:46:14 crc kubenswrapper[4959]: I0121 13:46:14.283534 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fj92" event={"ID":"c6d7cfbd-b948-4110-8ca9-f802c882e828","Type":"ContainerStarted","Data":"37798b0b33a76c1acfcbc8d806887b8533f5100be401f4f484197a111da744f9"} Jan 21 13:46:15 crc kubenswrapper[4959]: I0121 13:46:15.300425 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fj92" event={"ID":"c6d7cfbd-b948-4110-8ca9-f802c882e828","Type":"ContainerStarted","Data":"73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac"} Jan 21 13:46:16 crc kubenswrapper[4959]: I0121 13:46:16.301299 4959 generic.go:334] "Generic (PLEG): container finished" podID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerID="73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac" exitCode=0 Jan 21 13:46:16 crc kubenswrapper[4959]: I0121 13:46:16.301348 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8fj92" event={"ID":"c6d7cfbd-b948-4110-8ca9-f802c882e828","Type":"ContainerDied","Data":"73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac"} Jan 21 13:46:18 crc kubenswrapper[4959]: I0121 13:46:18.325463 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fj92" event={"ID":"c6d7cfbd-b948-4110-8ca9-f802c882e828","Type":"ContainerStarted","Data":"1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f"} Jan 21 13:46:18 crc kubenswrapper[4959]: I0121 13:46:18.347332 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8fj92" podStartSLOduration=2.869295833 podStartE2EDuration="6.347309731s" podCreationTimestamp="2026-01-21 13:46:12 +0000 UTC" firstStartedPulling="2026-01-21 13:46:14.285199592 +0000 UTC m=+2235.248230145" lastFinishedPulling="2026-01-21 13:46:17.7632135 +0000 UTC m=+2238.726244043" observedRunningTime="2026-01-21 13:46:18.342259356 +0000 UTC m=+2239.305289909" watchObservedRunningTime="2026-01-21 13:46:18.347309731 +0000 UTC m=+2239.310340264" Jan 21 13:46:21 crc kubenswrapper[4959]: I0121 13:46:21.379344 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:46:21 crc kubenswrapper[4959]: I0121 13:46:21.379972 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:46:21 crc kubenswrapper[4959]: I0121 13:46:21.380018 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:46:21 crc kubenswrapper[4959]: I0121 13:46:21.380787 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:46:21 crc kubenswrapper[4959]: I0121 13:46:21.380843 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" gracePeriod=600 Jan 21 13:46:22 crc kubenswrapper[4959]: E0121 13:46:22.003586 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:46:22 crc kubenswrapper[4959]: I0121 13:46:22.359493 4959 generic.go:334] "Generic (PLEG): container finished" 
podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" exitCode=0 Jan 21 13:46:22 crc kubenswrapper[4959]: I0121 13:46:22.359558 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"} Jan 21 13:46:22 crc kubenswrapper[4959]: I0121 13:46:22.359625 4959 scope.go:117] "RemoveContainer" containerID="1ee363e4e1583c15674ba6308ac663c5b4e8b3fda56922564a71d99462d29340" Jan 21 13:46:22 crc kubenswrapper[4959]: I0121 13:46:22.360429 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:46:22 crc kubenswrapper[4959]: E0121 13:46:22.360995 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:46:23 crc kubenswrapper[4959]: I0121 13:46:23.068050 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:23 crc kubenswrapper[4959]: I0121 13:46:23.068359 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:23 crc kubenswrapper[4959]: I0121 13:46:23.126196 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:23 crc kubenswrapper[4959]: I0121 13:46:23.410877 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:23 crc kubenswrapper[4959]: I0121 13:46:23.450441 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fj92"] Jan 21 13:46:25 crc kubenswrapper[4959]: I0121 13:46:25.386689 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8fj92" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerName="registry-server" containerID="cri-o://1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f" gracePeriod=2 Jan 21 13:46:25 crc kubenswrapper[4959]: I0121 13:46:25.870525 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:25 crc kubenswrapper[4959]: I0121 13:46:25.990293 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-catalog-content\") pod \"c6d7cfbd-b948-4110-8ca9-f802c882e828\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " Jan 21 13:46:25 crc kubenswrapper[4959]: I0121 13:46:25.990447 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56qpm\" (UniqueName: \"kubernetes.io/projected/c6d7cfbd-b948-4110-8ca9-f802c882e828-kube-api-access-56qpm\") pod \"c6d7cfbd-b948-4110-8ca9-f802c882e828\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " Jan 21 13:46:25 crc kubenswrapper[4959]: I0121 13:46:25.990629 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-utilities\") pod \"c6d7cfbd-b948-4110-8ca9-f802c882e828\" (UID: \"c6d7cfbd-b948-4110-8ca9-f802c882e828\") " Jan 21 13:46:25 crc kubenswrapper[4959]: I0121 13:46:25.991570 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-utilities" (OuterVolumeSpecName: "utilities") pod "c6d7cfbd-b948-4110-8ca9-f802c882e828" (UID: "c6d7cfbd-b948-4110-8ca9-f802c882e828"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:46:25 crc kubenswrapper[4959]: I0121 13:46:25.997349 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d7cfbd-b948-4110-8ca9-f802c882e828-kube-api-access-56qpm" (OuterVolumeSpecName: "kube-api-access-56qpm") pod "c6d7cfbd-b948-4110-8ca9-f802c882e828" (UID: "c6d7cfbd-b948-4110-8ca9-f802c882e828"). InnerVolumeSpecName "kube-api-access-56qpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.034920 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6d7cfbd-b948-4110-8ca9-f802c882e828" (UID: "c6d7cfbd-b948-4110-8ca9-f802c882e828"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.092695 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.092768 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d7cfbd-b948-4110-8ca9-f802c882e828-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.092805 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56qpm\" (UniqueName: \"kubernetes.io/projected/c6d7cfbd-b948-4110-8ca9-f802c882e828-kube-api-access-56qpm\") on node \"crc\" DevicePath \"\"" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.397224 4959 generic.go:334] "Generic (PLEG): container finished" podID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerID="1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f" exitCode=0 Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.397274 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fj92" event={"ID":"c6d7cfbd-b948-4110-8ca9-f802c882e828","Type":"ContainerDied","Data":"1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f"} Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.397304 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fj92" event={"ID":"c6d7cfbd-b948-4110-8ca9-f802c882e828","Type":"ContainerDied","Data":"37798b0b33a76c1acfcbc8d806887b8533f5100be401f4f484197a111da744f9"} Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.397337 4959 scope.go:117] "RemoveContainer" containerID="1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.397354 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fj92" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.432820 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fj92"] Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.434507 4959 scope.go:117] "RemoveContainer" containerID="73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.441232 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8fj92"] Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.451788 4959 scope.go:117] "RemoveContainer" containerID="066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.496855 4959 scope.go:117] "RemoveContainer" containerID="1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f" Jan 21 13:46:26 crc kubenswrapper[4959]: E0121 13:46:26.497280 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f\": container with ID starting with 1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f not found: ID does not exist" containerID="1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.497314 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f"} err="failed to get container status \"1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f\": rpc error: code = NotFound desc = could not find container \"1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f\": container with ID starting with 1d4032000400f24d0881efca76a201af241eb88e5c77073237eaf234ae65350f not found: ID does not exist" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.497335 4959 scope.go:117] "RemoveContainer" containerID="73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac" Jan 21 13:46:26 crc kubenswrapper[4959]: E0121 13:46:26.497543 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac\": container with ID starting with 73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac not found: ID does not exist" containerID="73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.497570 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac"} err="failed to get container status \"73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac\": rpc error: code = NotFound desc = could not find container \"73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac\": container with ID starting with 73ce368771c9ad248fa9fa749126f0dd08015dd67a2d61fdd588450034fbbfac not found: ID does not exist" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.497588 4959 scope.go:117] "RemoveContainer" containerID="066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7" Jan 21 13:46:26 crc kubenswrapper[4959]: E0121 13:46:26.497881 4959 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7\": container with ID starting with 066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7 not found: ID does not exist" containerID="066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7" Jan 21 13:46:26 crc kubenswrapper[4959]: I0121 13:46:26.497924 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7"} err="failed to get container status \"066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7\": rpc error: code = NotFound desc = could not find container \"066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7\": container with ID starting with 066ff77d2267e05c9a90d47e09d28e14372cf91bacb1b9b4ad84de9bf8e37ec7 not found: ID does not exist" Jan 21 13:46:27 crc kubenswrapper[4959]: I0121 13:46:27.296560 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" path="/var/lib/kubelet/pods/c6d7cfbd-b948-4110-8ca9-f802c882e828/volumes" Jan 21 13:46:35 crc kubenswrapper[4959]: I0121 13:46:35.298148 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:46:35 crc kubenswrapper[4959]: E0121 13:46:35.302857 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:46:46 crc kubenswrapper[4959]: I0121 13:46:46.286958 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:46:46 crc kubenswrapper[4959]: E0121 13:46:46.287708 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:47:01 crc kubenswrapper[4959]: I0121 13:47:01.286266 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:47:01 crc kubenswrapper[4959]: E0121 13:47:01.286997 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:47:14 crc kubenswrapper[4959]: I0121 13:47:14.287015 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:47:14 crc kubenswrapper[4959]: E0121 13:47:14.288228 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.039083 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.048305 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.058806 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.067143 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.073938 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.080530 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.086521 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vg6ts"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.093896 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jzgdz"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.099753 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5pbh2"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.106487 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h5mm2"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.113186 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-944s7"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.119326 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.125467 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5s7cb"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.131863 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t89bk"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.138852 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4rgnj"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.146239 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t89bk"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.153638 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps"] Jan 21 13:47:24 crc 
kubenswrapper[4959]: I0121 13:47:24.160915 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gwjps"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.167108 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v"] Jan 21 13:47:24 crc kubenswrapper[4959]: I0121 13:47:24.172681 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4ng5v"] Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.286712 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:47:25 crc kubenswrapper[4959]: E0121 13:47:25.287058 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.298890 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c764408-7cb2-4537-b591-626ea5924406" path="/var/lib/kubelet/pods/6c764408-7cb2-4537-b591-626ea5924406/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.299758 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74200132-4df6-40f7-a62d-c19984036788" path="/var/lib/kubelet/pods/74200132-4df6-40f7-a62d-c19984036788/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.300614 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5" path="/var/lib/kubelet/pods/8c1765e8-f8a2-4d09-8f76-ddb5fcaecaf5/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.301466 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf" path="/var/lib/kubelet/pods/91398f0b-edf0-4c8c-a0ab-f9ae0fecffdf/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.303005 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3837c43-f9b9-4d7f-80a2-26b582090af2" path="/var/lib/kubelet/pods/b3837c43-f9b9-4d7f-80a2-26b582090af2/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.303832 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5caad2c-f675-4a5f-8c5d-c711444ce2de" path="/var/lib/kubelet/pods/b5caad2c-f675-4a5f-8c5d-c711444ce2de/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.304609 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1" path="/var/lib/kubelet/pods/dcc0b2a2-957b-4ba9-9fdb-602b4f620ec1/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.306005 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e069780c-b1ae-4b75-8724-fe682e5a762d" path="/var/lib/kubelet/pods/e069780c-b1ae-4b75-8724-fe682e5a762d/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.306762 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e324e7a3-6b8a-4c28-a870-c03a9f772439" path="/var/lib/kubelet/pods/e324e7a3-6b8a-4c28-a870-c03a9f772439/volumes" Jan 21 13:47:25 crc kubenswrapper[4959]: I0121 13:47:25.307490 4959 
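
The machine-config-daemon entries that recur throughout this capture are all refusals to restart the same failed container while its restart back-off is in effect; "back-off 5m0s" indicates the delay has reached its ceiling. A sketch of the schedule, assuming kubelet's default container restart back-off (10s initial delay, doubling per failure, capped at 5m):

    import itertools

    def backoff_schedule(initial=10.0, factor=2.0, cap=300.0):
        # Delay before each successive restart of a crashing container.
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    print(list(itertools.islice(backoff_schedule(), 7)))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]  (seconds)
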
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c" path="/var/lib/kubelet/pods/f4f9eed6-82e4-4ccf-bdd7-19b6e92b521c/volumes" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.718993 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p"] Jan 21 13:47:29 crc kubenswrapper[4959]: E0121 13:47:29.720068 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerName="extract-content" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.720082 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerName="extract-content" Jan 21 13:47:29 crc kubenswrapper[4959]: E0121 13:47:29.720119 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerName="registry-server" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.720128 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerName="registry-server" Jan 21 13:47:29 crc kubenswrapper[4959]: E0121 13:47:29.720141 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerName="extract-utilities" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.720147 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerName="extract-utilities" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.720313 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d7cfbd-b948-4110-8ca9-f802c882e828" containerName="registry-server" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.720890 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.723449 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.723742 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.723798 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.723839 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.727549 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.736911 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p"] Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.739648 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.739702 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.739739 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.739768 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.739829 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhlxh\" (UniqueName: \"kubernetes.io/projected/544cfb93-3a88-4fe7-b4b4-b02447782767-kube-api-access-rhlxh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.841835 4959 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.841891 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.841967 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.841990 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.842032 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhlxh\" (UniqueName: \"kubernetes.io/projected/544cfb93-3a88-4fe7-b4b4-b02447782767-kube-api-access-rhlxh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.849060 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.849085 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.849446 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.856822 4959 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:29 crc kubenswrapper[4959]: I0121 13:47:29.859857 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhlxh\" (UniqueName: \"kubernetes.io/projected/544cfb93-3a88-4fe7-b4b4-b02447782767-kube-api-access-rhlxh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:30 crc kubenswrapper[4959]: I0121 13:47:30.041305 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:30 crc kubenswrapper[4959]: I0121 13:47:30.549258 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p"] Jan 21 13:47:30 crc kubenswrapper[4959]: I0121 13:47:30.909381 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" event={"ID":"544cfb93-3a88-4fe7-b4b4-b02447782767","Type":"ContainerStarted","Data":"491fd7a86e67a2de1604f66fd6482a30d2ee6fc378ffa3f55cca73aa6ae54c2a"} Jan 21 13:47:31 crc kubenswrapper[4959]: I0121 13:47:31.919305 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" event={"ID":"544cfb93-3a88-4fe7-b4b4-b02447782767","Type":"ContainerStarted","Data":"aecbd694e945ce403b1fa8e13f1f89fe6a1a5783d0e38d568855211de46245e3"} Jan 21 13:47:31 crc kubenswrapper[4959]: I0121 13:47:31.934815 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" podStartSLOduration=2.5132548999999997 podStartE2EDuration="2.93479888s" podCreationTimestamp="2026-01-21 13:47:29 +0000 UTC" firstStartedPulling="2026-01-21 13:47:30.560703152 +0000 UTC m=+2311.523733695" lastFinishedPulling="2026-01-21 13:47:30.982247132 +0000 UTC m=+2311.945277675" observedRunningTime="2026-01-21 13:47:31.933345481 +0000 UTC m=+2312.896376024" watchObservedRunningTime="2026-01-21 13:47:31.93479888 +0000 UTC m=+2312.897829413" Jan 21 13:47:35 crc kubenswrapper[4959]: I0121 13:47:35.355421 4959 scope.go:117] "RemoveContainer" containerID="0e981c57a0b8f29ef0dcaa90f451747f609e37dbc0b4e6084782ef92f5228288" Jan 21 13:47:35 crc kubenswrapper[4959]: I0121 13:47:35.407685 4959 scope.go:117] "RemoveContainer" containerID="4dfb5ce5974fee8b569de5047c4e7d7f224e13856ad62f29f744840e00b5893c" Jan 21 13:47:35 crc kubenswrapper[4959]: I0121 13:47:35.452030 4959 scope.go:117] "RemoveContainer" containerID="81c13dad251e94bac395b7b7c1210b6eff9c3ab41d922083610eb2dbd6e79cc8" Jan 21 13:47:35 crc kubenswrapper[4959]: I0121 13:47:35.526670 4959 scope.go:117] "RemoveContainer" containerID="8c48890c57a0ca7feb632145b173ac8398c4d935ce5faf42de9aa90e1d66dd1c" Jan 21 13:47:35 crc kubenswrapper[4959]: I0121 13:47:35.577438 4959 scope.go:117] "RemoveContainer" containerID="a4f962ff3b56d26d979ed8e38e9942deb73fcf868f4d0c54e3b31364ec7c1dc2" Jan 21 13:47:35 crc kubenswrapper[4959]: I0121 13:47:35.614059 4959 scope.go:117] "RemoveContainer" 
containerID="59f8021206f94808eda01634fadeef263d6fbe0450cfc0526768e11f83f749ea" Jan 21 13:47:35 crc kubenswrapper[4959]: I0121 13:47:35.665822 4959 scope.go:117] "RemoveContainer" containerID="00efa2327931af6fbcb3cb4c2bf9ea78dd73e7776612133feee4703af8ee0b70" Jan 21 13:47:39 crc kubenswrapper[4959]: I0121 13:47:39.290637 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:47:39 crc kubenswrapper[4959]: E0121 13:47:39.291383 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:47:44 crc kubenswrapper[4959]: I0121 13:47:44.015190 4959 generic.go:334] "Generic (PLEG): container finished" podID="544cfb93-3a88-4fe7-b4b4-b02447782767" containerID="aecbd694e945ce403b1fa8e13f1f89fe6a1a5783d0e38d568855211de46245e3" exitCode=0 Jan 21 13:47:44 crc kubenswrapper[4959]: I0121 13:47:44.015274 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" event={"ID":"544cfb93-3a88-4fe7-b4b4-b02447782767","Type":"ContainerDied","Data":"aecbd694e945ce403b1fa8e13f1f89fe6a1a5783d0e38d568855211de46245e3"} Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.517512 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.637534 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-inventory\") pod \"544cfb93-3a88-4fe7-b4b4-b02447782767\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.637669 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ssh-key-openstack-edpm-ipam\") pod \"544cfb93-3a88-4fe7-b4b4-b02447782767\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.637697 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-repo-setup-combined-ca-bundle\") pod \"544cfb93-3a88-4fe7-b4b4-b02447782767\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.637750 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhlxh\" (UniqueName: \"kubernetes.io/projected/544cfb93-3a88-4fe7-b4b4-b02447782767-kube-api-access-rhlxh\") pod \"544cfb93-3a88-4fe7-b4b4-b02447782767\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.637800 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ceph\") pod \"544cfb93-3a88-4fe7-b4b4-b02447782767\" (UID: \"544cfb93-3a88-4fe7-b4b4-b02447782767\") " Jan 21 13:47:45 crc 
kubenswrapper[4959]: I0121 13:47:45.643403 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544cfb93-3a88-4fe7-b4b4-b02447782767-kube-api-access-rhlxh" (OuterVolumeSpecName: "kube-api-access-rhlxh") pod "544cfb93-3a88-4fe7-b4b4-b02447782767" (UID: "544cfb93-3a88-4fe7-b4b4-b02447782767"). InnerVolumeSpecName "kube-api-access-rhlxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.659504 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ceph" (OuterVolumeSpecName: "ceph") pod "544cfb93-3a88-4fe7-b4b4-b02447782767" (UID: "544cfb93-3a88-4fe7-b4b4-b02447782767"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.659555 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "544cfb93-3a88-4fe7-b4b4-b02447782767" (UID: "544cfb93-3a88-4fe7-b4b4-b02447782767"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.664058 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-inventory" (OuterVolumeSpecName: "inventory") pod "544cfb93-3a88-4fe7-b4b4-b02447782767" (UID: "544cfb93-3a88-4fe7-b4b4-b02447782767"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.666014 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "544cfb93-3a88-4fe7-b4b4-b02447782767" (UID: "544cfb93-3a88-4fe7-b4b4-b02447782767"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.739967 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.740002 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.740017 4959 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.740029 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhlxh\" (UniqueName: \"kubernetes.io/projected/544cfb93-3a88-4fe7-b4b4-b02447782767-kube-api-access-rhlxh\") on node \"crc\" DevicePath \"\"" Jan 21 13:47:45 crc kubenswrapper[4959]: I0121 13:47:45.740049 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/544cfb93-3a88-4fe7-b4b4-b02447782767-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.033464 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" event={"ID":"544cfb93-3a88-4fe7-b4b4-b02447782767","Type":"ContainerDied","Data":"491fd7a86e67a2de1604f66fd6482a30d2ee6fc378ffa3f55cca73aa6ae54c2a"} Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.033505 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="491fd7a86e67a2de1604f66fd6482a30d2ee6fc378ffa3f55cca73aa6ae54c2a" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.033515 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.116859 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j"] Jan 21 13:47:46 crc kubenswrapper[4959]: E0121 13:47:46.117230 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544cfb93-3a88-4fe7-b4b4-b02447782767" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.117252 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="544cfb93-3a88-4fe7-b4b4-b02447782767" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.117400 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="544cfb93-3a88-4fe7-b4b4-b02447782767" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.117959 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.121309 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.121541 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.121702 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.121822 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.123750 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.130447 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j"] Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.145660 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.145701 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.145795 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbqv\" (UniqueName: \"kubernetes.io/projected/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-kube-api-access-mnbqv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.145817 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.145902 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.247201 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.247266 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.247416 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnbqv\" (UniqueName: \"kubernetes.io/projected/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-kube-api-access-mnbqv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.247449 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.247510 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.251359 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.251447 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.251877 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.258926 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.263582 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnbqv\" (UniqueName: \"kubernetes.io/projected/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-kube-api-access-mnbqv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.435563 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" Jan 21 13:47:46 crc kubenswrapper[4959]: I0121 13:47:46.950899 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j"] Jan 21 13:47:47 crc kubenswrapper[4959]: I0121 13:47:47.043506 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" event={"ID":"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab","Type":"ContainerStarted","Data":"fc97969e3f3430e56412473d8855255e865c04176e710b7625a32840c511a7d9"} Jan 21 13:47:48 crc kubenswrapper[4959]: I0121 13:47:48.053068 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" event={"ID":"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab","Type":"ContainerStarted","Data":"b30ebbb7f1a0adc6603120ef2e5dcd2bc97609214fa9053ca115b3897082708a"} Jan 21 13:47:48 crc kubenswrapper[4959]: I0121 13:47:48.077859 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" podStartSLOduration=1.456514739 podStartE2EDuration="2.07784205s" podCreationTimestamp="2026-01-21 13:47:46 +0000 UTC" firstStartedPulling="2026-01-21 13:47:46.961619911 +0000 UTC m=+2327.924650454" lastFinishedPulling="2026-01-21 13:47:47.582947222 +0000 UTC m=+2328.545977765" observedRunningTime="2026-01-21 13:47:48.068637123 +0000 UTC m=+2329.031667666" watchObservedRunningTime="2026-01-21 13:47:48.07784205 +0000 UTC m=+2329.040872593" Jan 21 13:47:52 crc kubenswrapper[4959]: I0121 13:47:52.286176 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:47:52 crc kubenswrapper[4959]: E0121 13:47:52.286768 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:48:07 crc kubenswrapper[4959]: I0121 13:48:07.286784 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:48:07 crc kubenswrapper[4959]: E0121 13:48:07.287644 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 21 13:48:20 crc kubenswrapper[4959]: I0121 13:48:20.286338 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"
Jan 21 13:48:20 crc kubenswrapper[4959]: E0121 13:48:20.287108 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 13:48:33 crc kubenswrapper[4959]: I0121 13:48:33.287201 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"
Jan 21 13:48:33 crc kubenswrapper[4959]: E0121 13:48:33.288209 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 13:48:35 crc kubenswrapper[4959]: I0121 13:48:35.812859 4959 scope.go:117] "RemoveContainer" containerID="88697ecdff62c0d131893c2400a5fa89d581e85de411bfd4a0a5fba867bcf42b"
Jan 21 13:48:45 crc kubenswrapper[4959]: I0121 13:48:45.287186 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"
Jan 21 13:48:45 crc kubenswrapper[4959]: E0121 13:48:45.288197 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 13:48:59 crc kubenswrapper[4959]: I0121 13:48:59.293801 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"
Jan 21 13:48:59 crc kubenswrapper[4959]: E0121 13:48:59.294725 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 13:49:11 crc kubenswrapper[4959]: I0121 13:49:11.286759 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"
Jan 21 13:49:11 crc kubenswrapper[4959]: E0121 13:49:11.287563 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 13:49:22 crc kubenswrapper[4959]: I0121 13:49:22.286821 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91"
Jan 21 13:49:22 crc kubenswrapper[4959]: E0121 13:49:22.287585 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 13:49:26 crc kubenswrapper[4959]: I0121 13:49:26.873825 4959 generic.go:334] "Generic (PLEG): container finished" podID="5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" containerID="b30ebbb7f1a0adc6603120ef2e5dcd2bc97609214fa9053ca115b3897082708a" exitCode=0
Jan 21 13:49:26 crc kubenswrapper[4959]: I0121 13:49:26.873898 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" event={"ID":"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab","Type":"ContainerDied","Data":"b30ebbb7f1a0adc6603120ef2e5dcd2bc97609214fa9053ca115b3897082708a"}
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.294853 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j"
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.416411 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnbqv\" (UniqueName: \"kubernetes.io/projected/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-kube-api-access-mnbqv\") pod \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") "
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.417220 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ceph\") pod \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") "
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.417695 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-inventory\") pod \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") "
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.418514 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-bootstrap-combined-ca-bundle\") pod \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") "
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.419021 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ssh-key-openstack-edpm-ipam\") pod \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\" (UID: \"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab\") "
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.427114 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-kube-api-access-mnbqv" (OuterVolumeSpecName: "kube-api-access-mnbqv") pod "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" (UID: "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab"). InnerVolumeSpecName "kube-api-access-mnbqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.427833 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" (UID: "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.428910 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ceph" (OuterVolumeSpecName: "ceph") pod "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" (UID: "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.451232 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-inventory" (OuterVolumeSpecName: "inventory") pod "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" (UID: "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.455453 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" (UID: "5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.523360 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.523391 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnbqv\" (UniqueName: \"kubernetes.io/projected/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-kube-api-access-mnbqv\") on node \"crc\" DevicePath \"\""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.523400 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.523420 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.523429 4959 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.895655 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j" event={"ID":"5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab","Type":"ContainerDied","Data":"fc97969e3f3430e56412473d8855255e865c04176e710b7625a32840c511a7d9"}
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.895710 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc97969e3f3430e56412473d8855255e865c04176e710b7625a32840c511a7d9"
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.896405 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j"
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.988418 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j"]
Jan 21 13:49:28 crc kubenswrapper[4959]: E0121 13:49:28.989148 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.989270 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.989695 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.990737 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j"
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.993012 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.993181 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.993432 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.993555 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:49:28 crc kubenswrapper[4959]: I0121 13:49:28.993904 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.012886 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j"] Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.140560 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.141169 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pfxn\" (UniqueName: \"kubernetes.io/projected/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-kube-api-access-9pfxn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.141398 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.141465 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.244024 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc 
kubenswrapper[4959]: I0121 13:49:29.244177 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.244226 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.244299 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pfxn\" (UniqueName: \"kubernetes.io/projected/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-kube-api-access-9pfxn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.249929 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.250025 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.251821 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.261792 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pfxn\" (UniqueName: \"kubernetes.io/projected/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-kube-api-access-9pfxn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.312935 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.835909 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j"] Jan 21 13:49:29 crc kubenswrapper[4959]: I0121 13:49:29.907725 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" event={"ID":"8ca1b87c-733e-4b60-b3f9-c8efd8c56527","Type":"ContainerStarted","Data":"82ab81d551a825a31d9d32df3145ebb0ea29dc9ca93b14c869a4c584a8953025"} Jan 21 13:49:30 crc kubenswrapper[4959]: I0121 13:49:30.917344 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" event={"ID":"8ca1b87c-733e-4b60-b3f9-c8efd8c56527","Type":"ContainerStarted","Data":"2da38bf174075db2d0491d2401eac61204af76a73806106cfdb580a8aeea4c1f"} Jan 21 13:49:30 crc kubenswrapper[4959]: I0121 13:49:30.940174 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" podStartSLOduration=2.522559583 podStartE2EDuration="2.940155818s" podCreationTimestamp="2026-01-21 13:49:28 +0000 UTC" firstStartedPulling="2026-01-21 13:49:29.840323503 +0000 UTC m=+2430.803354046" lastFinishedPulling="2026-01-21 13:49:30.257919708 +0000 UTC m=+2431.220950281" observedRunningTime="2026-01-21 13:49:30.933872157 +0000 UTC m=+2431.896902700" watchObservedRunningTime="2026-01-21 13:49:30.940155818 +0000 UTC m=+2431.903186361" Jan 21 13:49:34 crc kubenswrapper[4959]: I0121 13:49:34.286944 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:49:34 crc kubenswrapper[4959]: E0121 13:49:34.287827 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:49:35 crc kubenswrapper[4959]: I0121 13:49:35.891993 4959 scope.go:117] "RemoveContainer" containerID="79be8b58e01794df661a2d9ee329565b3dc98f206dd3e73a7bc302622d12de52" Jan 21 13:49:35 crc kubenswrapper[4959]: I0121 13:49:35.925181 4959 scope.go:117] "RemoveContainer" containerID="92870f3c5a86654d02e09fa9f80358743dfa356a764971ac281967169d6d5de4" Jan 21 13:49:47 crc kubenswrapper[4959]: I0121 13:49:47.286861 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:49:47 crc kubenswrapper[4959]: E0121 13:49:47.287736 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:49:57 crc kubenswrapper[4959]: I0121 13:49:57.163878 4959 generic.go:334] "Generic (PLEG): container finished" podID="8ca1b87c-733e-4b60-b3f9-c8efd8c56527" 
containerID="2da38bf174075db2d0491d2401eac61204af76a73806106cfdb580a8aeea4c1f" exitCode=0 Jan 21 13:49:57 crc kubenswrapper[4959]: I0121 13:49:57.164509 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" event={"ID":"8ca1b87c-733e-4b60-b3f9-c8efd8c56527","Type":"ContainerDied","Data":"2da38bf174075db2d0491d2401eac61204af76a73806106cfdb580a8aeea4c1f"} Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.556876 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.720841 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ceph\") pod \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.721672 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ssh-key-openstack-edpm-ipam\") pod \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.721927 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pfxn\" (UniqueName: \"kubernetes.io/projected/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-kube-api-access-9pfxn\") pod \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.722359 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-inventory\") pod \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\" (UID: \"8ca1b87c-733e-4b60-b3f9-c8efd8c56527\") " Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.726473 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ceph" (OuterVolumeSpecName: "ceph") pod "8ca1b87c-733e-4b60-b3f9-c8efd8c56527" (UID: "8ca1b87c-733e-4b60-b3f9-c8efd8c56527"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.728015 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-kube-api-access-9pfxn" (OuterVolumeSpecName: "kube-api-access-9pfxn") pod "8ca1b87c-733e-4b60-b3f9-c8efd8c56527" (UID: "8ca1b87c-733e-4b60-b3f9-c8efd8c56527"). InnerVolumeSpecName "kube-api-access-9pfxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.756492 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ca1b87c-733e-4b60-b3f9-c8efd8c56527" (UID: "8ca1b87c-733e-4b60-b3f9-c8efd8c56527"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.757003 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-inventory" (OuterVolumeSpecName: "inventory") pod "8ca1b87c-733e-4b60-b3f9-c8efd8c56527" (UID: "8ca1b87c-733e-4b60-b3f9-c8efd8c56527"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.826079 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.826434 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.826455 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pfxn\" (UniqueName: \"kubernetes.io/projected/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-kube-api-access-9pfxn\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:58 crc kubenswrapper[4959]: I0121 13:49:58.826473 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ca1b87c-733e-4b60-b3f9-c8efd8c56527-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.180079 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" event={"ID":"8ca1b87c-733e-4b60-b3f9-c8efd8c56527","Type":"ContainerDied","Data":"82ab81d551a825a31d9d32df3145ebb0ea29dc9ca93b14c869a4c584a8953025"} Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.180161 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ab81d551a825a31d9d32df3145ebb0ea29dc9ca93b14c869a4c584a8953025" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.180131 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.263405 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf"] Jan 21 13:49:59 crc kubenswrapper[4959]: E0121 13:49:59.263791 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca1b87c-733e-4b60-b3f9-c8efd8c56527" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.263812 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca1b87c-733e-4b60-b3f9-c8efd8c56527" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.264000 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca1b87c-733e-4b60-b3f9-c8efd8c56527" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.264626 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.267683 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.268521 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.268674 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.268700 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.274473 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf"] Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.277611 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.437856 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.437956 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsz9h\" (UniqueName: \"kubernetes.io/projected/127adb14-2780-4fe5-ad99-51f928db6ab8-kube-api-access-dsz9h\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.437984 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.438031 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.540281 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc 
kubenswrapper[4959]: I0121 13:49:59.540408 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.540450 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsz9h\" (UniqueName: \"kubernetes.io/projected/127adb14-2780-4fe5-ad99-51f928db6ab8-kube-api-access-dsz9h\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.540469 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.545636 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.551801 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.552047 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.559252 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsz9h\" (UniqueName: \"kubernetes.io/projected/127adb14-2780-4fe5-ad99-51f928db6ab8-kube-api-access-dsz9h\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:49:59 crc kubenswrapper[4959]: I0121 13:49:59.582964 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:50:00 crc kubenswrapper[4959]: I0121 13:50:00.106691 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf"] Jan 21 13:50:00 crc kubenswrapper[4959]: I0121 13:50:00.189389 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" event={"ID":"127adb14-2780-4fe5-ad99-51f928db6ab8","Type":"ContainerStarted","Data":"ad926f9d9c52a2c01a144803580cb00658e07ea674cf3b958c86f310b1e7c49e"} Jan 21 13:50:01 crc kubenswrapper[4959]: I0121 13:50:01.197337 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" event={"ID":"127adb14-2780-4fe5-ad99-51f928db6ab8","Type":"ContainerStarted","Data":"7be0ff362381126eeecfc2e51d8314d29255d940cf8c3baf8ea0d5d208093f70"} Jan 21 13:50:01 crc kubenswrapper[4959]: I0121 13:50:01.217141 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" podStartSLOduration=1.7357227210000001 podStartE2EDuration="2.217120376s" podCreationTimestamp="2026-01-21 13:49:59 +0000 UTC" firstStartedPulling="2026-01-21 13:50:00.112794049 +0000 UTC m=+2461.075824592" lastFinishedPulling="2026-01-21 13:50:00.594191704 +0000 UTC m=+2461.557222247" observedRunningTime="2026-01-21 13:50:01.212693475 +0000 UTC m=+2462.175724018" watchObservedRunningTime="2026-01-21 13:50:01.217120376 +0000 UTC m=+2462.180150929" Jan 21 13:50:01 crc kubenswrapper[4959]: I0121 13:50:01.286544 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:50:01 crc kubenswrapper[4959]: E0121 13:50:01.286780 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:50:06 crc kubenswrapper[4959]: I0121 13:50:06.244438 4959 generic.go:334] "Generic (PLEG): container finished" podID="127adb14-2780-4fe5-ad99-51f928db6ab8" containerID="7be0ff362381126eeecfc2e51d8314d29255d940cf8c3baf8ea0d5d208093f70" exitCode=0 Jan 21 13:50:06 crc kubenswrapper[4959]: I0121 13:50:06.245426 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" event={"ID":"127adb14-2780-4fe5-ad99-51f928db6ab8","Type":"ContainerDied","Data":"7be0ff362381126eeecfc2e51d8314d29255d940cf8c3baf8ea0d5d208093f70"} Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.661598 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.783622 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ssh-key-openstack-edpm-ipam\") pod \"127adb14-2780-4fe5-ad99-51f928db6ab8\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.783697 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-inventory\") pod \"127adb14-2780-4fe5-ad99-51f928db6ab8\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.783766 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ceph\") pod \"127adb14-2780-4fe5-ad99-51f928db6ab8\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.783819 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsz9h\" (UniqueName: \"kubernetes.io/projected/127adb14-2780-4fe5-ad99-51f928db6ab8-kube-api-access-dsz9h\") pod \"127adb14-2780-4fe5-ad99-51f928db6ab8\" (UID: \"127adb14-2780-4fe5-ad99-51f928db6ab8\") " Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.788925 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127adb14-2780-4fe5-ad99-51f928db6ab8-kube-api-access-dsz9h" (OuterVolumeSpecName: "kube-api-access-dsz9h") pod "127adb14-2780-4fe5-ad99-51f928db6ab8" (UID: "127adb14-2780-4fe5-ad99-51f928db6ab8"). InnerVolumeSpecName "kube-api-access-dsz9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.789035 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ceph" (OuterVolumeSpecName: "ceph") pod "127adb14-2780-4fe5-ad99-51f928db6ab8" (UID: "127adb14-2780-4fe5-ad99-51f928db6ab8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.806998 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "127adb14-2780-4fe5-ad99-51f928db6ab8" (UID: "127adb14-2780-4fe5-ad99-51f928db6ab8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.808134 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-inventory" (OuterVolumeSpecName: "inventory") pod "127adb14-2780-4fe5-ad99-51f928db6ab8" (UID: "127adb14-2780-4fe5-ad99-51f928db6ab8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.885997 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.886033 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.886045 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/127adb14-2780-4fe5-ad99-51f928db6ab8-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:07 crc kubenswrapper[4959]: I0121 13:50:07.886056 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsz9h\" (UniqueName: \"kubernetes.io/projected/127adb14-2780-4fe5-ad99-51f928db6ab8-kube-api-access-dsz9h\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.266755 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" event={"ID":"127adb14-2780-4fe5-ad99-51f928db6ab8","Type":"ContainerDied","Data":"ad926f9d9c52a2c01a144803580cb00658e07ea674cf3b958c86f310b1e7c49e"} Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.266799 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad926f9d9c52a2c01a144803580cb00658e07ea674cf3b958c86f310b1e7c49e" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.266781 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.340665 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55"] Jan 21 13:50:08 crc kubenswrapper[4959]: E0121 13:50:08.341091 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127adb14-2780-4fe5-ad99-51f928db6ab8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.341183 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="127adb14-2780-4fe5-ad99-51f928db6ab8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.341401 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="127adb14-2780-4fe5-ad99-51f928db6ab8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.342008 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.345530 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.345542 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.345956 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.346115 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.346892 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.355570 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55"] Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.398410 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.398477 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.398500 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v64cc\" (UniqueName: \"kubernetes.io/projected/48ab618a-8037-4dc8-ae21-b2e16c55aa47-kube-api-access-v64cc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.398620 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.499490 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.499534 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-v64cc\" (UniqueName: \"kubernetes.io/projected/48ab618a-8037-4dc8-ae21-b2e16c55aa47-kube-api-access-v64cc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.499625 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.499703 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.503842 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.503891 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.504853 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.522407 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v64cc\" (UniqueName: \"kubernetes.io/projected/48ab618a-8037-4dc8-ae21-b2e16c55aa47-kube-api-access-v64cc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mhg55\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:08 crc kubenswrapper[4959]: I0121 13:50:08.662195 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:09 crc kubenswrapper[4959]: I0121 13:50:09.172569 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55"] Jan 21 13:50:09 crc kubenswrapper[4959]: W0121 13:50:09.177431 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ab618a_8037_4dc8_ae21_b2e16c55aa47.slice/crio-cbb07a468ced955e82bdc2c67b952de5dabbe010f564a74b842f11a98af763e7 WatchSource:0}: Error finding container cbb07a468ced955e82bdc2c67b952de5dabbe010f564a74b842f11a98af763e7: Status 404 returned error can't find the container with id cbb07a468ced955e82bdc2c67b952de5dabbe010f564a74b842f11a98af763e7 Jan 21 13:50:09 crc kubenswrapper[4959]: I0121 13:50:09.275177 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" event={"ID":"48ab618a-8037-4dc8-ae21-b2e16c55aa47","Type":"ContainerStarted","Data":"cbb07a468ced955e82bdc2c67b952de5dabbe010f564a74b842f11a98af763e7"} Jan 21 13:50:10 crc kubenswrapper[4959]: I0121 13:50:10.284460 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" event={"ID":"48ab618a-8037-4dc8-ae21-b2e16c55aa47","Type":"ContainerStarted","Data":"9e0c4f4e11fabb00bfd0cc4e5bd2e1ecac3ef81a7270c6707da6da31b94bff62"} Jan 21 13:50:10 crc kubenswrapper[4959]: I0121 13:50:10.311395 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" podStartSLOduration=1.8904181260000001 podStartE2EDuration="2.311375546s" podCreationTimestamp="2026-01-21 13:50:08 +0000 UTC" firstStartedPulling="2026-01-21 13:50:09.18042875 +0000 UTC m=+2470.143459293" lastFinishedPulling="2026-01-21 13:50:09.60138617 +0000 UTC m=+2470.564416713" observedRunningTime="2026-01-21 13:50:10.302706217 +0000 UTC m=+2471.265736790" watchObservedRunningTime="2026-01-21 13:50:10.311375546 +0000 UTC m=+2471.274406089" Jan 21 13:50:14 crc kubenswrapper[4959]: I0121 13:50:14.286288 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:50:14 crc kubenswrapper[4959]: E0121 13:50:14.286923 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:50:29 crc kubenswrapper[4959]: I0121 13:50:29.315007 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:50:29 crc kubenswrapper[4959]: E0121 13:50:29.315719 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:50:43 crc kubenswrapper[4959]: I0121 
13:50:43.286475 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:50:43 crc kubenswrapper[4959]: E0121 13:50:43.287194 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:50:46 crc kubenswrapper[4959]: I0121 13:50:46.581392 4959 generic.go:334] "Generic (PLEG): container finished" podID="48ab618a-8037-4dc8-ae21-b2e16c55aa47" containerID="9e0c4f4e11fabb00bfd0cc4e5bd2e1ecac3ef81a7270c6707da6da31b94bff62" exitCode=0 Jan 21 13:50:46 crc kubenswrapper[4959]: I0121 13:50:46.581471 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" event={"ID":"48ab618a-8037-4dc8-ae21-b2e16c55aa47","Type":"ContainerDied","Data":"9e0c4f4e11fabb00bfd0cc4e5bd2e1ecac3ef81a7270c6707da6da31b94bff62"} Jan 21 13:50:47 crc kubenswrapper[4959]: I0121 13:50:47.980148 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.065444 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-inventory\") pod \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.065503 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v64cc\" (UniqueName: \"kubernetes.io/projected/48ab618a-8037-4dc8-ae21-b2e16c55aa47-kube-api-access-v64cc\") pod \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.065597 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ceph\") pod \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.065620 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ssh-key-openstack-edpm-ipam\") pod \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\" (UID: \"48ab618a-8037-4dc8-ae21-b2e16c55aa47\") " Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.071596 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ceph" (OuterVolumeSpecName: "ceph") pod "48ab618a-8037-4dc8-ae21-b2e16c55aa47" (UID: "48ab618a-8037-4dc8-ae21-b2e16c55aa47"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.071891 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ab618a-8037-4dc8-ae21-b2e16c55aa47-kube-api-access-v64cc" (OuterVolumeSpecName: "kube-api-access-v64cc") pod "48ab618a-8037-4dc8-ae21-b2e16c55aa47" (UID: "48ab618a-8037-4dc8-ae21-b2e16c55aa47"). InnerVolumeSpecName "kube-api-access-v64cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.097628 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-inventory" (OuterVolumeSpecName: "inventory") pod "48ab618a-8037-4dc8-ae21-b2e16c55aa47" (UID: "48ab618a-8037-4dc8-ae21-b2e16c55aa47"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.101758 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "48ab618a-8037-4dc8-ae21-b2e16c55aa47" (UID: "48ab618a-8037-4dc8-ae21-b2e16c55aa47"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.168365 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.168401 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v64cc\" (UniqueName: \"kubernetes.io/projected/48ab618a-8037-4dc8-ae21-b2e16c55aa47-kube-api-access-v64cc\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.168413 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.168423 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48ab618a-8037-4dc8-ae21-b2e16c55aa47-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.606025 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" event={"ID":"48ab618a-8037-4dc8-ae21-b2e16c55aa47","Type":"ContainerDied","Data":"cbb07a468ced955e82bdc2c67b952de5dabbe010f564a74b842f11a98af763e7"} Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.606419 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb07a468ced955e82bdc2c67b952de5dabbe010f564a74b842f11a98af763e7" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.606126 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mhg55" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.719697 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs"] Jan 21 13:50:48 crc kubenswrapper[4959]: E0121 13:50:48.720130 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ab618a-8037-4dc8-ae21-b2e16c55aa47" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.720150 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ab618a-8037-4dc8-ae21-b2e16c55aa47" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.720322 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ab618a-8037-4dc8-ae21-b2e16c55aa47" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.720924 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.725610 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.725848 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.725884 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.725905 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.726199 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.736329 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs"] Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.779793 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.779874 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.779937 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.779979 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7fd\" (UniqueName: \"kubernetes.io/projected/517053ca-3edf-40b9-b5a8-715d1f39c4a1-kube-api-access-fj7fd\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.881561 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.881642 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.881696 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.881734 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj7fd\" (UniqueName: \"kubernetes.io/projected/517053ca-3edf-40b9-b5a8-715d1f39c4a1-kube-api-access-fj7fd\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.886603 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.888472 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.894510 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:48 crc kubenswrapper[4959]: I0121 13:50:48.903010 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj7fd\" (UniqueName: \"kubernetes.io/projected/517053ca-3edf-40b9-b5a8-715d1f39c4a1-kube-api-access-fj7fd\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:49 crc kubenswrapper[4959]: I0121 13:50:49.042948 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:49 crc kubenswrapper[4959]: I0121 13:50:49.651580 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs"] Jan 21 13:50:49 crc kubenswrapper[4959]: W0121 13:50:49.662379 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod517053ca_3edf_40b9_b5a8_715d1f39c4a1.slice/crio-01feebe50588bfcb024f4d62d2f008335a083320ca5069efe8e4e0bcaa82b68d WatchSource:0}: Error finding container 01feebe50588bfcb024f4d62d2f008335a083320ca5069efe8e4e0bcaa82b68d: Status 404 returned error can't find the container with id 01feebe50588bfcb024f4d62d2f008335a083320ca5069efe8e4e0bcaa82b68d Jan 21 13:50:49 crc kubenswrapper[4959]: I0121 13:50:49.666972 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:50:50 crc kubenswrapper[4959]: I0121 13:50:50.622931 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" event={"ID":"517053ca-3edf-40b9-b5a8-715d1f39c4a1","Type":"ContainerStarted","Data":"fa41f7985338ba65a0759ccd86b928d8dd07f86e262d3e410ae0db695bcf485c"} Jan 21 13:50:50 crc kubenswrapper[4959]: I0121 13:50:50.624618 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" event={"ID":"517053ca-3edf-40b9-b5a8-715d1f39c4a1","Type":"ContainerStarted","Data":"01feebe50588bfcb024f4d62d2f008335a083320ca5069efe8e4e0bcaa82b68d"} Jan 21 13:50:50 crc kubenswrapper[4959]: I0121 13:50:50.645965 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" podStartSLOduration=2.159698261 podStartE2EDuration="2.64594913s" podCreationTimestamp="2026-01-21 13:50:48 +0000 UTC" firstStartedPulling="2026-01-21 13:50:49.666673182 +0000 UTC m=+2510.629703725" lastFinishedPulling="2026-01-21 13:50:50.152924051 +0000 UTC m=+2511.115954594" observedRunningTime="2026-01-21 13:50:50.642944821 +0000 UTC m=+2511.605975364" watchObservedRunningTime="2026-01-21 13:50:50.64594913 +0000 UTC m=+2511.608979673" Jan 21 13:50:54 crc kubenswrapper[4959]: I0121 13:50:54.658947 4959 generic.go:334] "Generic (PLEG): container finished" podID="517053ca-3edf-40b9-b5a8-715d1f39c4a1" containerID="fa41f7985338ba65a0759ccd86b928d8dd07f86e262d3e410ae0db695bcf485c" exitCode=0 Jan 21 13:50:54 crc kubenswrapper[4959]: I0121 13:50:54.659003 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" event={"ID":"517053ca-3edf-40b9-b5a8-715d1f39c4a1","Type":"ContainerDied","Data":"fa41f7985338ba65a0759ccd86b928d8dd07f86e262d3e410ae0db695bcf485c"} Jan 
21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.035728 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.134473 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ssh-key-openstack-edpm-ipam\") pod \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.134575 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj7fd\" (UniqueName: \"kubernetes.io/projected/517053ca-3edf-40b9-b5a8-715d1f39c4a1-kube-api-access-fj7fd\") pod \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.134602 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-inventory\") pod \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.134690 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ceph\") pod \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\" (UID: \"517053ca-3edf-40b9-b5a8-715d1f39c4a1\") " Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.139907 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517053ca-3edf-40b9-b5a8-715d1f39c4a1-kube-api-access-fj7fd" (OuterVolumeSpecName: "kube-api-access-fj7fd") pod "517053ca-3edf-40b9-b5a8-715d1f39c4a1" (UID: "517053ca-3edf-40b9-b5a8-715d1f39c4a1"). InnerVolumeSpecName "kube-api-access-fj7fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.142378 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ceph" (OuterVolumeSpecName: "ceph") pod "517053ca-3edf-40b9-b5a8-715d1f39c4a1" (UID: "517053ca-3edf-40b9-b5a8-715d1f39c4a1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.160089 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-inventory" (OuterVolumeSpecName: "inventory") pod "517053ca-3edf-40b9-b5a8-715d1f39c4a1" (UID: "517053ca-3edf-40b9-b5a8-715d1f39c4a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.162461 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "517053ca-3edf-40b9-b5a8-715d1f39c4a1" (UID: "517053ca-3edf-40b9-b5a8-715d1f39c4a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.236891 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.236935 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.236947 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj7fd\" (UniqueName: \"kubernetes.io/projected/517053ca-3edf-40b9-b5a8-715d1f39c4a1-kube-api-access-fj7fd\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.236955 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/517053ca-3edf-40b9-b5a8-715d1f39c4a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.286039 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:50:56 crc kubenswrapper[4959]: E0121 13:50:56.286403 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.674761 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" event={"ID":"517053ca-3edf-40b9-b5a8-715d1f39c4a1","Type":"ContainerDied","Data":"01feebe50588bfcb024f4d62d2f008335a083320ca5069efe8e4e0bcaa82b68d"} Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.674816 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01feebe50588bfcb024f4d62d2f008335a083320ca5069efe8e4e0bcaa82b68d" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.674821 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.742691 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855"] Jan 21 13:50:56 crc kubenswrapper[4959]: E0121 13:50:56.743142 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517053ca-3edf-40b9-b5a8-715d1f39c4a1" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.743168 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="517053ca-3edf-40b9-b5a8-715d1f39c4a1" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.743561 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="517053ca-3edf-40b9-b5a8-715d1f39c4a1" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.744353 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.746172 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.748032 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.748345 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.748313 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.748445 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.762281 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855"] Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.847579 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.847653 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.847718 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj4jr\" (UniqueName: \"kubernetes.io/projected/50dd9f09-bbb6-4caf-a0d7-19a991752a70-kube-api-access-dj4jr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.847787 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.949339 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.949638 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.949753 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj4jr\" (UniqueName: \"kubernetes.io/projected/50dd9f09-bbb6-4caf-a0d7-19a991752a70-kube-api-access-dj4jr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.949925 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.955039 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.955272 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.955625 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:56 crc kubenswrapper[4959]: I0121 13:50:56.973041 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj4jr\" (UniqueName: \"kubernetes.io/projected/50dd9f09-bbb6-4caf-a0d7-19a991752a70-kube-api-access-dj4jr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-25855\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:57 crc kubenswrapper[4959]: I0121 13:50:57.061729 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:50:57 crc kubenswrapper[4959]: I0121 13:50:57.639775 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855"] Jan 21 13:50:57 crc kubenswrapper[4959]: W0121 13:50:57.649364 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50dd9f09_bbb6_4caf_a0d7_19a991752a70.slice/crio-aaf69caff82111fff799df8c91d78400311a8d28a602a93d8c0d95360c170f30 WatchSource:0}: Error finding container aaf69caff82111fff799df8c91d78400311a8d28a602a93d8c0d95360c170f30: Status 404 returned error can't find the container with id aaf69caff82111fff799df8c91d78400311a8d28a602a93d8c0d95360c170f30 Jan 21 13:50:57 crc kubenswrapper[4959]: I0121 13:50:57.683153 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" event={"ID":"50dd9f09-bbb6-4caf-a0d7-19a991752a70","Type":"ContainerStarted","Data":"aaf69caff82111fff799df8c91d78400311a8d28a602a93d8c0d95360c170f30"} Jan 21 13:50:58 crc kubenswrapper[4959]: I0121 13:50:58.691834 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" event={"ID":"50dd9f09-bbb6-4caf-a0d7-19a991752a70","Type":"ContainerStarted","Data":"147caedff1a6d1d1c6c69315d30b9c4fe2b06b24e2a544d5399c7a8bfebd6efd"} Jan 21 13:50:58 crc kubenswrapper[4959]: I0121 13:50:58.718204 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" podStartSLOduration=2.337598312 podStartE2EDuration="2.718181383s" podCreationTimestamp="2026-01-21 13:50:56 +0000 UTC" firstStartedPulling="2026-01-21 13:50:57.646483027 +0000 UTC m=+2518.609513570" lastFinishedPulling="2026-01-21 13:50:58.027066098 +0000 UTC m=+2518.990096641" observedRunningTime="2026-01-21 13:50:58.71392087 +0000 UTC m=+2519.676951423" watchObservedRunningTime="2026-01-21 13:50:58.718181383 +0000 UTC m=+2519.681211936" Jan 21 13:51:08 crc kubenswrapper[4959]: I0121 13:51:08.287358 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:51:08 crc kubenswrapper[4959]: E0121 13:51:08.288243 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:51:22 crc kubenswrapper[4959]: I0121 13:51:22.286709 4959 scope.go:117] 
"RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:51:22 crc kubenswrapper[4959]: I0121 13:51:22.970846 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"0e072505b3f6e8e4daa3192ee931518b321029b68c6efcf7eab398b25eb749ee"} Jan 21 13:51:45 crc kubenswrapper[4959]: I0121 13:51:45.483589 4959 generic.go:334] "Generic (PLEG): container finished" podID="50dd9f09-bbb6-4caf-a0d7-19a991752a70" containerID="147caedff1a6d1d1c6c69315d30b9c4fe2b06b24e2a544d5399c7a8bfebd6efd" exitCode=0 Jan 21 13:51:45 crc kubenswrapper[4959]: I0121 13:51:45.483756 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" event={"ID":"50dd9f09-bbb6-4caf-a0d7-19a991752a70","Type":"ContainerDied","Data":"147caedff1a6d1d1c6c69315d30b9c4fe2b06b24e2a544d5399c7a8bfebd6efd"} Jan 21 13:51:46 crc kubenswrapper[4959]: I0121 13:51:46.903421 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.090471 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-inventory\") pod \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.091909 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ceph\") pod \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.092117 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ssh-key-openstack-edpm-ipam\") pod \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.092249 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj4jr\" (UniqueName: \"kubernetes.io/projected/50dd9f09-bbb6-4caf-a0d7-19a991752a70-kube-api-access-dj4jr\") pod \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\" (UID: \"50dd9f09-bbb6-4caf-a0d7-19a991752a70\") " Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.097486 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ceph" (OuterVolumeSpecName: "ceph") pod "50dd9f09-bbb6-4caf-a0d7-19a991752a70" (UID: "50dd9f09-bbb6-4caf-a0d7-19a991752a70"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.097843 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50dd9f09-bbb6-4caf-a0d7-19a991752a70-kube-api-access-dj4jr" (OuterVolumeSpecName: "kube-api-access-dj4jr") pod "50dd9f09-bbb6-4caf-a0d7-19a991752a70" (UID: "50dd9f09-bbb6-4caf-a0d7-19a991752a70"). InnerVolumeSpecName "kube-api-access-dj4jr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.119038 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50dd9f09-bbb6-4caf-a0d7-19a991752a70" (UID: "50dd9f09-bbb6-4caf-a0d7-19a991752a70"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.124200 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-inventory" (OuterVolumeSpecName: "inventory") pod "50dd9f09-bbb6-4caf-a0d7-19a991752a70" (UID: "50dd9f09-bbb6-4caf-a0d7-19a991752a70"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.194676 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.194730 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj4jr\" (UniqueName: \"kubernetes.io/projected/50dd9f09-bbb6-4caf-a0d7-19a991752a70-kube-api-access-dj4jr\") on node \"crc\" DevicePath \"\"" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.194746 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.194759 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50dd9f09-bbb6-4caf-a0d7-19a991752a70-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.503038 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" event={"ID":"50dd9f09-bbb6-4caf-a0d7-19a991752a70","Type":"ContainerDied","Data":"aaf69caff82111fff799df8c91d78400311a8d28a602a93d8c0d95360c170f30"} Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.503076 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf69caff82111fff799df8c91d78400311a8d28a602a93d8c0d95360c170f30" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.503131 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-25855" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.585876 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7dfqd"] Jan 21 13:51:47 crc kubenswrapper[4959]: E0121 13:51:47.586352 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50dd9f09-bbb6-4caf-a0d7-19a991752a70" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.586378 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="50dd9f09-bbb6-4caf-a0d7-19a991752a70" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.586618 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="50dd9f09-bbb6-4caf-a0d7-19a991752a70" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.587400 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.590599 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.590885 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.590949 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.590896 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.593147 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.596627 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7dfqd"] Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.599984 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.600117 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ceph\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.600179 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.600246 4959 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5v8\" (UniqueName: \"kubernetes.io/projected/7262c849-02b2-4abd-a73a-0b6ca5784de3-kube-api-access-rj5v8\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.701308 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.701420 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ceph\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.701453 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.701540 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj5v8\" (UniqueName: \"kubernetes.io/projected/7262c849-02b2-4abd-a73a-0b6ca5784de3-kube-api-access-rj5v8\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.706216 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.707371 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.711107 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ceph\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.722998 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj5v8\" (UniqueName: \"kubernetes.io/projected/7262c849-02b2-4abd-a73a-0b6ca5784de3-kube-api-access-rj5v8\") pod \"ssh-known-hosts-edpm-deployment-7dfqd\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:47 crc kubenswrapper[4959]: I0121 13:51:47.904049 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:51:48 crc kubenswrapper[4959]: I0121 13:51:48.440784 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7dfqd"] Jan 21 13:51:48 crc kubenswrapper[4959]: I0121 13:51:48.512535 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" event={"ID":"7262c849-02b2-4abd-a73a-0b6ca5784de3","Type":"ContainerStarted","Data":"8f8680d773399ece781326b78dfd3595dce22bad7976eacb334596e2314539f6"} Jan 21 13:51:49 crc kubenswrapper[4959]: I0121 13:51:49.523258 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" event={"ID":"7262c849-02b2-4abd-a73a-0b6ca5784de3","Type":"ContainerStarted","Data":"16bbd9d68ae5e2a763759b7166a2c2dc08f96680135d8f58f459c3c1486e48e9"} Jan 21 13:51:49 crc kubenswrapper[4959]: I0121 13:51:49.545785 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" podStartSLOduration=2.127084577 podStartE2EDuration="2.545757824s" podCreationTimestamp="2026-01-21 13:51:47 +0000 UTC" firstStartedPulling="2026-01-21 13:51:48.449811872 +0000 UTC m=+2569.412842415" lastFinishedPulling="2026-01-21 13:51:48.868485119 +0000 UTC m=+2569.831515662" observedRunningTime="2026-01-21 13:51:49.541120138 +0000 UTC m=+2570.504150681" watchObservedRunningTime="2026-01-21 13:51:49.545757824 +0000 UTC m=+2570.508788367" Jan 21 13:51:58 crc kubenswrapper[4959]: I0121 13:51:58.600192 4959 generic.go:334] "Generic (PLEG): container finished" podID="7262c849-02b2-4abd-a73a-0b6ca5784de3" containerID="16bbd9d68ae5e2a763759b7166a2c2dc08f96680135d8f58f459c3c1486e48e9" exitCode=0 Jan 21 13:51:58 crc kubenswrapper[4959]: I0121 13:51:58.600311 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" event={"ID":"7262c849-02b2-4abd-a73a-0b6ca5784de3","Type":"ContainerDied","Data":"16bbd9d68ae5e2a763759b7166a2c2dc08f96680135d8f58f459c3c1486e48e9"} Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.051374 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.221187 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-inventory-0\") pod \"7262c849-02b2-4abd-a73a-0b6ca5784de3\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.221258 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ceph\") pod \"7262c849-02b2-4abd-a73a-0b6ca5784de3\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.221362 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ssh-key-openstack-edpm-ipam\") pod \"7262c849-02b2-4abd-a73a-0b6ca5784de3\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.221394 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj5v8\" (UniqueName: \"kubernetes.io/projected/7262c849-02b2-4abd-a73a-0b6ca5784de3-kube-api-access-rj5v8\") pod \"7262c849-02b2-4abd-a73a-0b6ca5784de3\" (UID: \"7262c849-02b2-4abd-a73a-0b6ca5784de3\") " Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.228174 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ceph" (OuterVolumeSpecName: "ceph") pod "7262c849-02b2-4abd-a73a-0b6ca5784de3" (UID: "7262c849-02b2-4abd-a73a-0b6ca5784de3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.246294 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7262c849-02b2-4abd-a73a-0b6ca5784de3-kube-api-access-rj5v8" (OuterVolumeSpecName: "kube-api-access-rj5v8") pod "7262c849-02b2-4abd-a73a-0b6ca5784de3" (UID: "7262c849-02b2-4abd-a73a-0b6ca5784de3"). InnerVolumeSpecName "kube-api-access-rj5v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.282219 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7262c849-02b2-4abd-a73a-0b6ca5784de3" (UID: "7262c849-02b2-4abd-a73a-0b6ca5784de3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.283321 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "7262c849-02b2-4abd-a73a-0b6ca5784de3" (UID: "7262c849-02b2-4abd-a73a-0b6ca5784de3"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.323091 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.323136 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.323146 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj5v8\" (UniqueName: \"kubernetes.io/projected/7262c849-02b2-4abd-a73a-0b6ca5784de3-kube-api-access-rj5v8\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.323155 4959 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7262c849-02b2-4abd-a73a-0b6ca5784de3-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.616641 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" event={"ID":"7262c849-02b2-4abd-a73a-0b6ca5784de3","Type":"ContainerDied","Data":"8f8680d773399ece781326b78dfd3595dce22bad7976eacb334596e2314539f6"} Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.616678 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f8680d773399ece781326b78dfd3595dce22bad7976eacb334596e2314539f6" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.616684 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7dfqd" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.700974 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284"] Jan 21 13:52:00 crc kubenswrapper[4959]: E0121 13:52:00.701455 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7262c849-02b2-4abd-a73a-0b6ca5784de3" containerName="ssh-known-hosts-edpm-deployment" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.701477 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7262c849-02b2-4abd-a73a-0b6ca5784de3" containerName="ssh-known-hosts-edpm-deployment" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.701667 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7262c849-02b2-4abd-a73a-0b6ca5784de3" containerName="ssh-known-hosts-edpm-deployment" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.702435 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.706146 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.706405 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.706580 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.706635 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.706586 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.724321 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284"] Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.830697 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.830858 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.830891 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.830987 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cscc5\" (UniqueName: \"kubernetes.io/projected/fbcf281a-ccb1-4740-a8f9-06dcdba80445-kube-api-access-cscc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.932276 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.932342 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.932377 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cscc5\" (UniqueName: \"kubernetes.io/projected/fbcf281a-ccb1-4740-a8f9-06dcdba80445-kube-api-access-cscc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.932492 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.936367 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.936401 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.936609 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:00 crc kubenswrapper[4959]: I0121 13:52:00.949533 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscc5\" (UniqueName: \"kubernetes.io/projected/fbcf281a-ccb1-4740-a8f9-06dcdba80445-kube-api-access-cscc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-55284\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:01 crc kubenswrapper[4959]: I0121 13:52:01.025731 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:01 crc kubenswrapper[4959]: I0121 13:52:01.567784 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284"] Jan 21 13:52:01 crc kubenswrapper[4959]: W0121 13:52:01.579335 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbcf281a_ccb1_4740_a8f9_06dcdba80445.slice/crio-a6269a6128e49fc31e10a7a84abc7363b9f818905064be28fd2321f5ac396419 WatchSource:0}: Error finding container a6269a6128e49fc31e10a7a84abc7363b9f818905064be28fd2321f5ac396419: Status 404 returned error can't find the container with id a6269a6128e49fc31e10a7a84abc7363b9f818905064be28fd2321f5ac396419 Jan 21 13:52:01 crc kubenswrapper[4959]: I0121 13:52:01.625303 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" event={"ID":"fbcf281a-ccb1-4740-a8f9-06dcdba80445","Type":"ContainerStarted","Data":"a6269a6128e49fc31e10a7a84abc7363b9f818905064be28fd2321f5ac396419"} Jan 21 13:52:02 crc kubenswrapper[4959]: I0121 13:52:02.641251 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" event={"ID":"fbcf281a-ccb1-4740-a8f9-06dcdba80445","Type":"ContainerStarted","Data":"5e10ba8485de79ece7a5f2d89566a0ba6795b7d776e94e6332484b59996ede8b"} Jan 21 13:52:02 crc kubenswrapper[4959]: I0121 13:52:02.672979 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" podStartSLOduration=2.284205088 podStartE2EDuration="2.672958521s" podCreationTimestamp="2026-01-21 13:52:00 +0000 UTC" firstStartedPulling="2026-01-21 13:52:01.581543292 +0000 UTC m=+2582.544573835" lastFinishedPulling="2026-01-21 13:52:01.970296725 +0000 UTC m=+2582.933327268" observedRunningTime="2026-01-21 13:52:02.660039739 +0000 UTC m=+2583.623070282" watchObservedRunningTime="2026-01-21 13:52:02.672958521 +0000 UTC m=+2583.635989064" Jan 21 13:52:09 crc kubenswrapper[4959]: I0121 13:52:09.696129 4959 generic.go:334] "Generic (PLEG): container finished" podID="fbcf281a-ccb1-4740-a8f9-06dcdba80445" containerID="5e10ba8485de79ece7a5f2d89566a0ba6795b7d776e94e6332484b59996ede8b" exitCode=0 Jan 21 13:52:09 crc kubenswrapper[4959]: I0121 13:52:09.696215 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" event={"ID":"fbcf281a-ccb1-4740-a8f9-06dcdba80445","Type":"ContainerDied","Data":"5e10ba8485de79ece7a5f2d89566a0ba6795b7d776e94e6332484b59996ede8b"} Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.069414 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.213173 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ssh-key-openstack-edpm-ipam\") pod \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.213354 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-inventory\") pod \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.213394 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ceph\") pod \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.213451 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cscc5\" (UniqueName: \"kubernetes.io/projected/fbcf281a-ccb1-4740-a8f9-06dcdba80445-kube-api-access-cscc5\") pod \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\" (UID: \"fbcf281a-ccb1-4740-a8f9-06dcdba80445\") " Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.219380 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcf281a-ccb1-4740-a8f9-06dcdba80445-kube-api-access-cscc5" (OuterVolumeSpecName: "kube-api-access-cscc5") pod "fbcf281a-ccb1-4740-a8f9-06dcdba80445" (UID: "fbcf281a-ccb1-4740-a8f9-06dcdba80445"). InnerVolumeSpecName "kube-api-access-cscc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.222923 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ceph" (OuterVolumeSpecName: "ceph") pod "fbcf281a-ccb1-4740-a8f9-06dcdba80445" (UID: "fbcf281a-ccb1-4740-a8f9-06dcdba80445"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.238404 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbcf281a-ccb1-4740-a8f9-06dcdba80445" (UID: "fbcf281a-ccb1-4740-a8f9-06dcdba80445"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.238625 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-inventory" (OuterVolumeSpecName: "inventory") pod "fbcf281a-ccb1-4740-a8f9-06dcdba80445" (UID: "fbcf281a-ccb1-4740-a8f9-06dcdba80445"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.315391 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.315424 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.315434 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cscc5\" (UniqueName: \"kubernetes.io/projected/fbcf281a-ccb1-4740-a8f9-06dcdba80445-kube-api-access-cscc5\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.315444 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf281a-ccb1-4740-a8f9-06dcdba80445-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.713190 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" event={"ID":"fbcf281a-ccb1-4740-a8f9-06dcdba80445","Type":"ContainerDied","Data":"a6269a6128e49fc31e10a7a84abc7363b9f818905064be28fd2321f5ac396419"} Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.713248 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6269a6128e49fc31e10a7a84abc7363b9f818905064be28fd2321f5ac396419" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.714115 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-55284" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.787447 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc"] Jan 21 13:52:11 crc kubenswrapper[4959]: E0121 13:52:11.787906 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcf281a-ccb1-4740-a8f9-06dcdba80445" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.787932 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcf281a-ccb1-4740-a8f9-06dcdba80445" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.788178 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcf281a-ccb1-4740-a8f9-06dcdba80445" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.788920 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.790744 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.790764 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.791178 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.791182 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.793387 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.805988 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc"] Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.928637 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvvd\" (UniqueName: \"kubernetes.io/projected/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-kube-api-access-qqvvd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.929014 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.929209 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:11 crc kubenswrapper[4959]: I0121 13:52:11.929367 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.030558 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.030618 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.030655 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.030691 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvvd\" (UniqueName: \"kubernetes.io/projected/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-kube-api-access-qqvvd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.035145 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.035374 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.044692 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.046261 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvvd\" (UniqueName: \"kubernetes.io/projected/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-kube-api-access-qqvvd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.105971 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.642025 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc"] Jan 21 13:52:12 crc kubenswrapper[4959]: I0121 13:52:12.724716 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" event={"ID":"1a205c16-e1de-4ea3-af09-c17d2daf0bdf","Type":"ContainerStarted","Data":"fdb3aaa592511c590d06caec0b5ceaee59d949cc1c314f81f349376644c83451"} Jan 21 13:52:13 crc kubenswrapper[4959]: I0121 13:52:13.736171 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" event={"ID":"1a205c16-e1de-4ea3-af09-c17d2daf0bdf","Type":"ContainerStarted","Data":"586cdebba23c9daa430249cc6e8db4980709eafe1873f03190ca49993f043129"} Jan 21 13:52:13 crc kubenswrapper[4959]: I0121 13:52:13.755799 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" podStartSLOduration=2.273977485 podStartE2EDuration="2.755780683s" podCreationTimestamp="2026-01-21 13:52:11 +0000 UTC" firstStartedPulling="2026-01-21 13:52:12.650791715 +0000 UTC m=+2593.613822258" lastFinishedPulling="2026-01-21 13:52:13.132594913 +0000 UTC m=+2594.095625456" observedRunningTime="2026-01-21 13:52:13.74981228 +0000 UTC m=+2594.712842823" watchObservedRunningTime="2026-01-21 13:52:13.755780683 +0000 UTC m=+2594.718811226" Jan 21 13:52:23 crc kubenswrapper[4959]: I0121 13:52:23.820936 4959 generic.go:334] "Generic (PLEG): container finished" podID="1a205c16-e1de-4ea3-af09-c17d2daf0bdf" containerID="586cdebba23c9daa430249cc6e8db4980709eafe1873f03190ca49993f043129" exitCode=0 Jan 21 13:52:23 crc kubenswrapper[4959]: I0121 13:52:23.821065 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" event={"ID":"1a205c16-e1de-4ea3-af09-c17d2daf0bdf","Type":"ContainerDied","Data":"586cdebba23c9daa430249cc6e8db4980709eafe1873f03190ca49993f043129"} Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.244129 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.265653 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ssh-key-openstack-edpm-ipam\") pod \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.265710 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-inventory\") pod \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.265805 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ceph\") pod \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.265831 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvvd\" (UniqueName: \"kubernetes.io/projected/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-kube-api-access-qqvvd\") pod \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\" (UID: \"1a205c16-e1de-4ea3-af09-c17d2daf0bdf\") " Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.277547 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-kube-api-access-qqvvd" (OuterVolumeSpecName: "kube-api-access-qqvvd") pod "1a205c16-e1de-4ea3-af09-c17d2daf0bdf" (UID: "1a205c16-e1de-4ea3-af09-c17d2daf0bdf"). InnerVolumeSpecName "kube-api-access-qqvvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.280824 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ceph" (OuterVolumeSpecName: "ceph") pod "1a205c16-e1de-4ea3-af09-c17d2daf0bdf" (UID: "1a205c16-e1de-4ea3-af09-c17d2daf0bdf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.302247 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-inventory" (OuterVolumeSpecName: "inventory") pod "1a205c16-e1de-4ea3-af09-c17d2daf0bdf" (UID: "1a205c16-e1de-4ea3-af09-c17d2daf0bdf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.305326 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a205c16-e1de-4ea3-af09-c17d2daf0bdf" (UID: "1a205c16-e1de-4ea3-af09-c17d2daf0bdf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.367404 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvvd\" (UniqueName: \"kubernetes.io/projected/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-kube-api-access-qqvvd\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.367543 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.367785 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.367849 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1a205c16-e1de-4ea3-af09-c17d2daf0bdf-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.837831 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" event={"ID":"1a205c16-e1de-4ea3-af09-c17d2daf0bdf","Type":"ContainerDied","Data":"fdb3aaa592511c590d06caec0b5ceaee59d949cc1c314f81f349376644c83451"} Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.838107 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdb3aaa592511c590d06caec0b5ceaee59d949cc1c314f81f349376644c83451" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.837931 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.919072 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"] Jan 21 13:52:25 crc kubenswrapper[4959]: E0121 13:52:25.919545 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a205c16-e1de-4ea3-af09-c17d2daf0bdf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.919568 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a205c16-e1de-4ea3-af09-c17d2daf0bdf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.919757 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a205c16-e1de-4ea3-af09-c17d2daf0bdf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.920408 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.922438 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.922611 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.923292 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.923640 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.924295 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.924583 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.924586 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.928770 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 13:52:25 crc kubenswrapper[4959]: I0121 13:52:25.935859 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"] Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.077726 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.077811 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.077845 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99pn5\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-kube-api-access-99pn5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.077886 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.078384 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.078479 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.078571 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.078679 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.078771 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.078871 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.078977 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.079122 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.079217 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.180737 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181165 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181205 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99pn5\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-kube-api-access-99pn5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181243 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181280 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181297 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181327 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181351 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181370 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181390 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181409 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181434 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.181455 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.185391 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.185467 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.185797 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.185972 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.186066 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.186470 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.186662 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.187546 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.187699 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.188370 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.188485 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.190884 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.197930 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99pn5\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-kube-api-access-99pn5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mncfl\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.247305 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.827927 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"]
Jan 21 13:52:26 crc kubenswrapper[4959]: I0121 13:52:26.849594 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" event={"ID":"6d4c9b96-2baa-4fa3-92f4-b263c4123fec","Type":"ContainerStarted","Data":"effafb1c9811a16860a40174ce8039cabc4d317fabcfd0c387cfdf51f1d78526"}
Jan 21 13:52:27 crc kubenswrapper[4959]: I0121 13:52:27.859627 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" event={"ID":"6d4c9b96-2baa-4fa3-92f4-b263c4123fec","Type":"ContainerStarted","Data":"6c3879c39e6f6e6d2ffb581e00a9faa59349bb19bd5558ad3db3533e38f0ea1c"}
Jan 21 13:52:27 crc kubenswrapper[4959]: I0121 13:52:27.890454 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" podStartSLOduration=2.398511437 podStartE2EDuration="2.890433571s" podCreationTimestamp="2026-01-21 13:52:25 +0000 UTC" firstStartedPulling="2026-01-21 13:52:26.835447504 +0000 UTC m=+2607.798478047" lastFinishedPulling="2026-01-21 13:52:27.327369638 +0000 UTC m=+2608.290400181" observedRunningTime="2026-01-21 13:52:27.885931518 +0000 UTC m=+2608.848962071" watchObservedRunningTime="2026-01-21 13:52:27.890433571 +0000 UTC m=+2608.853464114"
Jan 21 13:53:02 crc kubenswrapper[4959]: I0121 13:53:02.258223 4959 generic.go:334] "Generic (PLEG): container finished" podID="6d4c9b96-2baa-4fa3-92f4-b263c4123fec" containerID="6c3879c39e6f6e6d2ffb581e00a9faa59349bb19bd5558ad3db3533e38f0ea1c" exitCode=0
Jan 21 13:53:02 crc kubenswrapper[4959]: I0121 13:53:02.258330 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" event={"ID":"6d4c9b96-2baa-4fa3-92f4-b263c4123fec","Type":"ContainerDied","Data":"6c3879c39e6f6e6d2ffb581e00a9faa59349bb19bd5558ad3db3533e38f0ea1c"}
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.732535 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896046 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-bootstrap-combined-ca-bundle\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896108 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99pn5\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-kube-api-access-99pn5\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896134 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ovn-combined-ca-bundle\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896179 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ssh-key-openstack-edpm-ipam\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896209 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-ovn-default-certs-0\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896247 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ceph\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896280 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896311 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-neutron-metadata-combined-ca-bundle\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896352 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896441 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-inventory\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896461 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-repo-setup-combined-ca-bundle\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896490 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-nova-combined-ca-bundle\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.896508 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-libvirt-combined-ca-bundle\") pod \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\" (UID: \"6d4c9b96-2baa-4fa3-92f4-b263c4123fec\") "
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.909530 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-kube-api-access-99pn5" (OuterVolumeSpecName: "kube-api-access-99pn5") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "kube-api-access-99pn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.911224 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.913276 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.915286 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.916288 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.918380 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.918517 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.932983 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ceph" (OuterVolumeSpecName: "ceph") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.933299 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.933336 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.944226 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.952246 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-inventory" (OuterVolumeSpecName: "inventory") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.969700 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6d4c9b96-2baa-4fa3-92f4-b263c4123fec" (UID: "6d4c9b96-2baa-4fa3-92f4-b263c4123fec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998239 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998292 4959 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998304 4959 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998316 4959 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998327 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998337 4959 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998346 4959 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998354 4959 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998363 4959 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998373 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99pn5\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-kube-api-access-99pn5\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998383 4959 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998392 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:03 crc kubenswrapper[4959]: I0121 13:53:03.998402 4959 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6d4c9b96-2baa-4fa3-92f4-b263c4123fec-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.279777 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl" event={"ID":"6d4c9b96-2baa-4fa3-92f4-b263c4123fec","Type":"ContainerDied","Data":"effafb1c9811a16860a40174ce8039cabc4d317fabcfd0c387cfdf51f1d78526"}
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.279816 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="effafb1c9811a16860a40174ce8039cabc4d317fabcfd0c387cfdf51f1d78526"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.279865 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mncfl"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.391966 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"]
Jan 21 13:53:04 crc kubenswrapper[4959]: E0121 13:53:04.392557 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4c9b96-2baa-4fa3-92f4-b263c4123fec" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.392591 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4c9b96-2baa-4fa3-92f4-b263c4123fec" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.392823 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4c9b96-2baa-4fa3-92f4-b263c4123fec" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.393594 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.403405 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"]
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.436590 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.436636 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.436647 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.436677 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.436967 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.539030 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.539081 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.539713 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwfsh\" (UniqueName: \"kubernetes.io/projected/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-kube-api-access-vwfsh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.539966 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.641723 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwfsh\" (UniqueName: \"kubernetes.io/projected/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-kube-api-access-vwfsh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.642071 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.642141 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.642176 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.652627 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.654857 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.655404 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.666889 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwfsh\" (UniqueName: \"kubernetes.io/projected/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-kube-api-access-vwfsh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:04 crc kubenswrapper[4959]: I0121 13:53:04.754069 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:05 crc kubenswrapper[4959]: I0121 13:53:05.314743 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"]
Jan 21 13:53:06 crc kubenswrapper[4959]: I0121 13:53:06.297681 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2" event={"ID":"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7","Type":"ContainerStarted","Data":"43ccac8854354ed91e567c4d97f76293f83848bf66dc276f4c1ec8aa30c4ed5c"}
Jan 21 13:53:06 crc kubenswrapper[4959]: I0121 13:53:06.299077 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2" event={"ID":"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7","Type":"ContainerStarted","Data":"5d2a2a34b7a888c82fd80e6611e96fa1839f07ebbb684f42866cd194fb95832f"}
Jan 21 13:53:06 crc kubenswrapper[4959]: I0121 13:53:06.322151 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2" podStartSLOduration=1.765147173 podStartE2EDuration="2.322134799s" podCreationTimestamp="2026-01-21 13:53:04 +0000 UTC" firstStartedPulling="2026-01-21 13:53:05.291842386 +0000 UTC m=+2646.254872929" lastFinishedPulling="2026-01-21 13:53:05.848830012 +0000 UTC m=+2646.811860555" observedRunningTime="2026-01-21 13:53:06.316601098 +0000 UTC m=+2647.279631651" watchObservedRunningTime="2026-01-21 13:53:06.322134799 +0000 UTC m=+2647.285165342"
Jan 21 13:53:12 crc kubenswrapper[4959]: I0121 13:53:12.355362 4959 generic.go:334] "Generic (PLEG): container finished" podID="56c1b89e-0983-4452-b6ff-ddc66c8dcfc7" containerID="43ccac8854354ed91e567c4d97f76293f83848bf66dc276f4c1ec8aa30c4ed5c" exitCode=0
Jan 21 13:53:12 crc kubenswrapper[4959]: I0121 13:53:12.355488 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2" event={"ID":"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7","Type":"ContainerDied","Data":"43ccac8854354ed91e567c4d97f76293f83848bf66dc276f4c1ec8aa30c4ed5c"}
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.833938 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.913903 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-inventory\") pod \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") "
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.914426 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwfsh\" (UniqueName: \"kubernetes.io/projected/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-kube-api-access-vwfsh\") pod \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") "
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.914604 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ceph\") pod \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") "
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.914758 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ssh-key-openstack-edpm-ipam\") pod \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\" (UID: \"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7\") "
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.919536 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-kube-api-access-vwfsh" (OuterVolumeSpecName: "kube-api-access-vwfsh") pod "56c1b89e-0983-4452-b6ff-ddc66c8dcfc7" (UID: "56c1b89e-0983-4452-b6ff-ddc66c8dcfc7"). InnerVolumeSpecName "kube-api-access-vwfsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.920012 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ceph" (OuterVolumeSpecName: "ceph") pod "56c1b89e-0983-4452-b6ff-ddc66c8dcfc7" (UID: "56c1b89e-0983-4452-b6ff-ddc66c8dcfc7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.937614 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56c1b89e-0983-4452-b6ff-ddc66c8dcfc7" (UID: "56c1b89e-0983-4452-b6ff-ddc66c8dcfc7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:13 crc kubenswrapper[4959]: I0121 13:53:13.939068 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-inventory" (OuterVolumeSpecName: "inventory") pod "56c1b89e-0983-4452-b6ff-ddc66c8dcfc7" (UID: "56c1b89e-0983-4452-b6ff-ddc66c8dcfc7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.017228 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwfsh\" (UniqueName: \"kubernetes.io/projected/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-kube-api-access-vwfsh\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.017259 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.017270 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.017281 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c1b89e-0983-4452-b6ff-ddc66c8dcfc7-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.375489 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2" event={"ID":"56c1b89e-0983-4452-b6ff-ddc66c8dcfc7","Type":"ContainerDied","Data":"5d2a2a34b7a888c82fd80e6611e96fa1839f07ebbb684f42866cd194fb95832f"}
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.375543 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d2a2a34b7a888c82fd80e6611e96fa1839f07ebbb684f42866cd194fb95832f"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.375572 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.663151 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"]
Jan 21 13:53:14 crc kubenswrapper[4959]: E0121 13:53:14.663509 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c1b89e-0983-4452-b6ff-ddc66c8dcfc7" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.663528 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c1b89e-0983-4452-b6ff-ddc66c8dcfc7" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.663719 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c1b89e-0983-4452-b6ff-ddc66c8dcfc7" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.664323 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.666815 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.667246 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.667284 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.667429 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.678550 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.678991 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.679822 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"]
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.749448 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.749501 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.749537 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.749579 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"
Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.749700 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"
Jan 21
13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.749759 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqvn8\" (UniqueName: \"kubernetes.io/projected/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-kube-api-access-vqvn8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.851264 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.851402 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.851462 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqvn8\" (UniqueName: \"kubernetes.io/projected/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-kube-api-access-vqvn8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.851589 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.851621 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.851684 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.852737 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.862184 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.863275 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.863692 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.864334 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.873648 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqvn8\" (UniqueName: \"kubernetes.io/projected/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-kube-api-access-vqvn8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5l5sp\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:14 crc kubenswrapper[4959]: I0121 13:53:14.982246 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:53:15 crc kubenswrapper[4959]: I0121 13:53:15.545562 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp"] Jan 21 13:53:16 crc kubenswrapper[4959]: I0121 13:53:16.430402 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" event={"ID":"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49","Type":"ContainerStarted","Data":"d966698833eef88dc64ce187b7a18d593eff2af84f1a9dd46b84df98a5923f7b"} Jan 21 13:53:16 crc kubenswrapper[4959]: I0121 13:53:16.431134 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" event={"ID":"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49","Type":"ContainerStarted","Data":"5dd0af475679b7b1d4c8574f8020b382a307b3a77493ca0186743c9714de1b48"} Jan 21 13:53:16 crc kubenswrapper[4959]: I0121 13:53:16.456974 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" podStartSLOduration=2.056343294 podStartE2EDuration="2.45695393s" podCreationTimestamp="2026-01-21 13:53:14 +0000 UTC" firstStartedPulling="2026-01-21 13:53:15.557361578 +0000 UTC m=+2656.520392131" lastFinishedPulling="2026-01-21 13:53:15.957972224 +0000 UTC m=+2656.921002767" observedRunningTime="2026-01-21 13:53:16.452050576 +0000 UTC m=+2657.415081149" watchObservedRunningTime="2026-01-21 13:53:16.45695393 +0000 UTC m=+2657.419984473" Jan 21 13:53:51 crc kubenswrapper[4959]: I0121 13:53:51.379588 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:53:51 crc kubenswrapper[4959]: I0121 13:53:51.380336 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.307016 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bqtzw"] Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.309555 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.321790 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqtzw"] Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.425072 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m4zw\" (UniqueName: \"kubernetes.io/projected/4badabeb-595a-4157-b0ae-678c939e92f3-kube-api-access-2m4zw\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.425801 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-catalog-content\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.426004 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-utilities\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.527331 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m4zw\" (UniqueName: \"kubernetes.io/projected/4badabeb-595a-4157-b0ae-678c939e92f3-kube-api-access-2m4zw\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.527442 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-catalog-content\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.527619 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-utilities\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.527931 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-catalog-content\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.528269 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-utilities\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.550271 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2m4zw\" (UniqueName: \"kubernetes.io/projected/4badabeb-595a-4157-b0ae-678c939e92f3-kube-api-access-2m4zw\") pod \"redhat-operators-bqtzw\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:13 crc kubenswrapper[4959]: I0121 13:54:13.636777 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:14 crc kubenswrapper[4959]: I0121 13:54:14.143368 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqtzw"] Jan 21 13:54:14 crc kubenswrapper[4959]: I0121 13:54:14.925902 4959 generic.go:334] "Generic (PLEG): container finished" podID="4badabeb-595a-4157-b0ae-678c939e92f3" containerID="c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e" exitCode=0 Jan 21 13:54:14 crc kubenswrapper[4959]: I0121 13:54:14.925952 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtzw" event={"ID":"4badabeb-595a-4157-b0ae-678c939e92f3","Type":"ContainerDied","Data":"c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e"} Jan 21 13:54:14 crc kubenswrapper[4959]: I0121 13:54:14.927790 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtzw" event={"ID":"4badabeb-595a-4157-b0ae-678c939e92f3","Type":"ContainerStarted","Data":"e0d806f2a5924d9cb70ff962446ce5ff586504dde6e35a9cac19de3bc01dc02c"} Jan 21 13:54:16 crc kubenswrapper[4959]: I0121 13:54:16.947985 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtzw" event={"ID":"4badabeb-595a-4157-b0ae-678c939e92f3","Type":"ContainerStarted","Data":"91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c"} Jan 21 13:54:17 crc kubenswrapper[4959]: I0121 13:54:17.966443 4959 generic.go:334] "Generic (PLEG): container finished" podID="4badabeb-595a-4157-b0ae-678c939e92f3" containerID="91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c" exitCode=0 Jan 21 13:54:17 crc kubenswrapper[4959]: I0121 13:54:17.966499 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtzw" event={"ID":"4badabeb-595a-4157-b0ae-678c939e92f3","Type":"ContainerDied","Data":"91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c"} Jan 21 13:54:18 crc kubenswrapper[4959]: I0121 13:54:18.976040 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtzw" event={"ID":"4badabeb-595a-4157-b0ae-678c939e92f3","Type":"ContainerStarted","Data":"168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a"} Jan 21 13:54:19 crc kubenswrapper[4959]: I0121 13:54:19.003534 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bqtzw" podStartSLOduration=2.429241441 podStartE2EDuration="6.003515502s" podCreationTimestamp="2026-01-21 13:54:13 +0000 UTC" firstStartedPulling="2026-01-21 13:54:14.927618456 +0000 UTC m=+2715.890648999" lastFinishedPulling="2026-01-21 13:54:18.501892517 +0000 UTC m=+2719.464923060" observedRunningTime="2026-01-21 13:54:18.994569829 +0000 UTC m=+2719.957600372" watchObservedRunningTime="2026-01-21 13:54:19.003515502 +0000 UTC m=+2719.966546045" Jan 21 13:54:21 crc kubenswrapper[4959]: I0121 13:54:21.380292 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:54:21 crc kubenswrapper[4959]: I0121 13:54:21.380632 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:54:23 crc kubenswrapper[4959]: I0121 13:54:23.636941 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:23 crc kubenswrapper[4959]: I0121 13:54:23.637488 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:24 crc kubenswrapper[4959]: I0121 13:54:24.694825 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bqtzw" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="registry-server" probeResult="failure" output=< Jan 21 13:54:24 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 13:54:24 crc kubenswrapper[4959]: > Jan 21 13:54:29 crc kubenswrapper[4959]: I0121 13:54:29.053035 4959 generic.go:334] "Generic (PLEG): container finished" podID="eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" containerID="d966698833eef88dc64ce187b7a18d593eff2af84f1a9dd46b84df98a5923f7b" exitCode=0 Jan 21 13:54:29 crc kubenswrapper[4959]: I0121 13:54:29.053089 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" event={"ID":"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49","Type":"ContainerDied","Data":"d966698833eef88dc64ce187b7a18d593eff2af84f1a9dd46b84df98a5923f7b"} Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.528041 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.655729 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-inventory\") pod \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.656050 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovn-combined-ca-bundle\") pod \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.656120 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovncontroller-config-0\") pod \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.656197 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ceph\") pod \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.656265 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqvn8\" (UniqueName: \"kubernetes.io/projected/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-kube-api-access-vqvn8\") pod \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.656288 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ssh-key-openstack-edpm-ipam\") pod \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\" (UID: \"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49\") " Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.662474 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ceph" (OuterVolumeSpecName: "ceph") pod "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" (UID: "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.668768 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" (UID: "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.668849 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-kube-api-access-vqvn8" (OuterVolumeSpecName: "kube-api-access-vqvn8") pod "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" (UID: "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49"). InnerVolumeSpecName "kube-api-access-vqvn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.681658 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" (UID: "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.687066 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-inventory" (OuterVolumeSpecName: "inventory") pod "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" (UID: "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.693766 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" (UID: "eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.758383 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.758420 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.758434 4959 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.758446 4959 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.758457 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:30 crc kubenswrapper[4959]: I0121 13:54:30.758469 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqvn8\" (UniqueName: \"kubernetes.io/projected/eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49-kube-api-access-vqvn8\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.073899 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" event={"ID":"eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49","Type":"ContainerDied","Data":"5dd0af475679b7b1d4c8574f8020b382a307b3a77493ca0186743c9714de1b48"} Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.074250 4959 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5dd0af475679b7b1d4c8574f8020b382a307b3a77493ca0186743c9714de1b48" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.073956 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5l5sp" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.170960 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x"] Jan 21 13:54:31 crc kubenswrapper[4959]: E0121 13:54:31.171582 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.171665 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.171903 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.172548 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.174718 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.174971 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.175320 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.175420 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.175521 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.175794 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.185173 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.185243 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x"] Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.267888 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.267947 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.267976 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.268015 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.268038 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xskh\" (UniqueName: \"kubernetes.io/projected/14765967-d282-4405-8ad4-03c801137ed7-kube-api-access-5xskh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.268059 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.268137 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.370042 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.370112 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xskh\" (UniqueName: \"kubernetes.io/projected/14765967-d282-4405-8ad4-03c801137ed7-kube-api-access-5xskh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.370141 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.370207 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.370275 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.370313 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.370339 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.374689 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.375517 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.375851 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.378183 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.379438 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.382806 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.397682 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xskh\" (UniqueName: \"kubernetes.io/projected/14765967-d282-4405-8ad4-03c801137ed7-kube-api-access-5xskh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:31 crc kubenswrapper[4959]: I0121 13:54:31.500925 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:54:32 crc kubenswrapper[4959]: I0121 13:54:32.009086 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x"] Jan 21 13:54:32 crc kubenswrapper[4959]: I0121 13:54:32.081660 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" event={"ID":"14765967-d282-4405-8ad4-03c801137ed7","Type":"ContainerStarted","Data":"7df82e76c830021b844ab1b41477bfadc550f4cfd36ca2105c41cb779c569e0f"} Jan 21 13:54:33 crc kubenswrapper[4959]: I0121 13:54:33.090523 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" event={"ID":"14765967-d282-4405-8ad4-03c801137ed7","Type":"ContainerStarted","Data":"00a27ee77ff5597e6586a7566b03075f58e1e1a92ab0f9daec3b00602f6cd526"} Jan 21 13:54:33 crc kubenswrapper[4959]: I0121 13:54:33.111267 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" podStartSLOduration=1.69090404 podStartE2EDuration="2.111250663s" podCreationTimestamp="2026-01-21 13:54:31 +0000 UTC" firstStartedPulling="2026-01-21 13:54:32.012772604 +0000 UTC m=+2732.975803147" lastFinishedPulling="2026-01-21 13:54:32.433119217 +0000 UTC m=+2733.396149770" observedRunningTime="2026-01-21 13:54:33.106457774 +0000 UTC m=+2734.069488327" watchObservedRunningTime="2026-01-21 13:54:33.111250663 +0000 UTC m=+2734.074281206" Jan 21 13:54:33 crc kubenswrapper[4959]: I0121 13:54:33.690540 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:33 crc kubenswrapper[4959]: I0121 13:54:33.757582 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:33 crc kubenswrapper[4959]: I0121 13:54:33.932755 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqtzw"] Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.107239 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bqtzw" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="registry-server" containerID="cri-o://168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a" gracePeriod=2 Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.606547 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.766217 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-catalog-content\") pod \"4badabeb-595a-4157-b0ae-678c939e92f3\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.766319 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-utilities\") pod \"4badabeb-595a-4157-b0ae-678c939e92f3\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.766421 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m4zw\" (UniqueName: \"kubernetes.io/projected/4badabeb-595a-4157-b0ae-678c939e92f3-kube-api-access-2m4zw\") pod \"4badabeb-595a-4157-b0ae-678c939e92f3\" (UID: \"4badabeb-595a-4157-b0ae-678c939e92f3\") " Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.767856 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-utilities" (OuterVolumeSpecName: "utilities") pod "4badabeb-595a-4157-b0ae-678c939e92f3" (UID: "4badabeb-595a-4157-b0ae-678c939e92f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.781459 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4badabeb-595a-4157-b0ae-678c939e92f3-kube-api-access-2m4zw" (OuterVolumeSpecName: "kube-api-access-2m4zw") pod "4badabeb-595a-4157-b0ae-678c939e92f3" (UID: "4badabeb-595a-4157-b0ae-678c939e92f3"). InnerVolumeSpecName "kube-api-access-2m4zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.869072 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m4zw\" (UniqueName: \"kubernetes.io/projected/4badabeb-595a-4157-b0ae-678c939e92f3-kube-api-access-2m4zw\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.869131 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.893388 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4badabeb-595a-4157-b0ae-678c939e92f3" (UID: "4badabeb-595a-4157-b0ae-678c939e92f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:54:35 crc kubenswrapper[4959]: I0121 13:54:35.970567 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4badabeb-595a-4157-b0ae-678c939e92f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.115839 4959 generic.go:334] "Generic (PLEG): container finished" podID="4badabeb-595a-4157-b0ae-678c939e92f3" containerID="168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a" exitCode=0 Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.115885 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtzw" event={"ID":"4badabeb-595a-4157-b0ae-678c939e92f3","Type":"ContainerDied","Data":"168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a"} Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.115913 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqtzw" event={"ID":"4badabeb-595a-4157-b0ae-678c939e92f3","Type":"ContainerDied","Data":"e0d806f2a5924d9cb70ff962446ce5ff586504dde6e35a9cac19de3bc01dc02c"} Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.115933 4959 scope.go:117] "RemoveContainer" containerID="168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.115937 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqtzw" Jan 21 13:54:36 crc kubenswrapper[4959]: E0121 13:54:36.120300 4959 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/redhat-operators-bqtzw_openshift-marketplace_registry-server-168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a.log: no such file or directory" path="/var/log/containers/redhat-operators-bqtzw_openshift-marketplace_registry-server-168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a.log" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.140591 4959 scope.go:117] "RemoveContainer" containerID="91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.151241 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqtzw"] Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.165064 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bqtzw"] Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.175439 4959 scope.go:117] "RemoveContainer" containerID="c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.229387 4959 scope.go:117] "RemoveContainer" containerID="168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a" Jan 21 13:54:36 crc kubenswrapper[4959]: E0121 13:54:36.233189 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a\": container with ID starting with 168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a not found: ID does not exist" containerID="168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.233225 4959 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a"} err="failed to get container status \"168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a\": rpc error: code = NotFound desc = could not find container \"168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a\": container with ID starting with 168a0e60de7f0da669c1f3b0e756a652b3db208db3e9c12ac7673778807af54a not found: ID does not exist" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.233247 4959 scope.go:117] "RemoveContainer" containerID="91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c" Jan 21 13:54:36 crc kubenswrapper[4959]: E0121 13:54:36.237184 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c\": container with ID starting with 91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c not found: ID does not exist" containerID="91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.237219 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c"} err="failed to get container status \"91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c\": rpc error: code = NotFound desc = could not find container \"91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c\": container with ID starting with 91ac1be55babd231be7ba25dfb366fa5794288cd1c2199ad41f99e5990a2140c not found: ID does not exist" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.237243 4959 scope.go:117] "RemoveContainer" containerID="c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e" Jan 21 13:54:36 crc kubenswrapper[4959]: E0121 13:54:36.242633 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e\": container with ID starting with c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e not found: ID does not exist" containerID="c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e" Jan 21 13:54:36 crc kubenswrapper[4959]: I0121 13:54:36.242673 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e"} err="failed to get container status \"c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e\": rpc error: code = NotFound desc = could not find container \"c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e\": container with ID starting with c50416a9049a912fa415c80a9949296e8a3d0b5e8446fb665ce905724247248e not found: ID does not exist" Jan 21 13:54:37 crc kubenswrapper[4959]: I0121 13:54:37.296208 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" path="/var/lib/kubelet/pods/4badabeb-595a-4157-b0ae-678c939e92f3/volumes" Jan 21 13:54:51 crc kubenswrapper[4959]: I0121 13:54:51.379382 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 
13:54:51 crc kubenswrapper[4959]: I0121 13:54:51.379998 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:54:51 crc kubenswrapper[4959]: I0121 13:54:51.380040 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:54:51 crc kubenswrapper[4959]: I0121 13:54:51.380676 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e072505b3f6e8e4daa3192ee931518b321029b68c6efcf7eab398b25eb749ee"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:54:51 crc kubenswrapper[4959]: I0121 13:54:51.380718 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://0e072505b3f6e8e4daa3192ee931518b321029b68c6efcf7eab398b25eb749ee" gracePeriod=600 Jan 21 13:54:52 crc kubenswrapper[4959]: I0121 13:54:52.237268 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"0e072505b3f6e8e4daa3192ee931518b321029b68c6efcf7eab398b25eb749ee"} Jan 21 13:54:52 crc kubenswrapper[4959]: I0121 13:54:52.237377 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="0e072505b3f6e8e4daa3192ee931518b321029b68c6efcf7eab398b25eb749ee" exitCode=0 Jan 21 13:54:52 crc kubenswrapper[4959]: I0121 13:54:52.237720 4959 scope.go:117] "RemoveContainer" containerID="0d7017178a444a9754aa9b7bf445bf2508119add464b45632db3369fca046a91" Jan 21 13:54:52 crc kubenswrapper[4959]: I0121 13:54:52.237737 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"} Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.179662 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nsfw2"] Jan 21 13:55:24 crc kubenswrapper[4959]: E0121 13:55:24.180613 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="extract-utilities" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.180629 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="extract-utilities" Jan 21 13:55:24 crc kubenswrapper[4959]: E0121 13:55:24.180656 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="extract-content" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.180663 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="extract-content" Jan 21 13:55:24 crc kubenswrapper[4959]: E0121 
13:55:24.180690 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="registry-server" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.180698 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="registry-server" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.180877 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="4badabeb-595a-4157-b0ae-678c939e92f3" containerName="registry-server" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.182074 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.193352 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsfw2"] Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.307814 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-utilities\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.308202 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr9vg\" (UniqueName: \"kubernetes.io/projected/527a53d9-45b3-48ef-8adf-ecab726cdd17-kube-api-access-nr9vg\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.308312 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-catalog-content\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.410917 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr9vg\" (UniqueName: \"kubernetes.io/projected/527a53d9-45b3-48ef-8adf-ecab726cdd17-kube-api-access-nr9vg\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.410984 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-catalog-content\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.411084 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-utilities\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.412557 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-utilities\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.412637 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-catalog-content\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.430361 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr9vg\" (UniqueName: \"kubernetes.io/projected/527a53d9-45b3-48ef-8adf-ecab726cdd17-kube-api-access-nr9vg\") pod \"redhat-marketplace-nsfw2\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:24 crc kubenswrapper[4959]: I0121 13:55:24.505120 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:25 crc kubenswrapper[4959]: I0121 13:55:25.028636 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsfw2"] Jan 21 13:55:25 crc kubenswrapper[4959]: I0121 13:55:25.493057 4959 generic.go:334] "Generic (PLEG): container finished" podID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerID="f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06" exitCode=0 Jan 21 13:55:25 crc kubenswrapper[4959]: I0121 13:55:25.493196 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsfw2" event={"ID":"527a53d9-45b3-48ef-8adf-ecab726cdd17","Type":"ContainerDied","Data":"f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06"} Jan 21 13:55:25 crc kubenswrapper[4959]: I0121 13:55:25.493451 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsfw2" event={"ID":"527a53d9-45b3-48ef-8adf-ecab726cdd17","Type":"ContainerStarted","Data":"cfeb34aa503223d4e11bdcedb5ef13b8ca821515558226d3013524ec039a4f08"} Jan 21 13:55:26 crc kubenswrapper[4959]: I0121 13:55:26.506746 4959 generic.go:334] "Generic (PLEG): container finished" podID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerID="7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a" exitCode=0 Jan 21 13:55:26 crc kubenswrapper[4959]: I0121 13:55:26.506995 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsfw2" event={"ID":"527a53d9-45b3-48ef-8adf-ecab726cdd17","Type":"ContainerDied","Data":"7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a"} Jan 21 13:55:27 crc kubenswrapper[4959]: I0121 13:55:27.518290 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsfw2" event={"ID":"527a53d9-45b3-48ef-8adf-ecab726cdd17","Type":"ContainerStarted","Data":"5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4"} Jan 21 13:55:27 crc kubenswrapper[4959]: I0121 13:55:27.538314 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nsfw2" podStartSLOduration=2.115714755 podStartE2EDuration="3.538295326s" podCreationTimestamp="2026-01-21 13:55:24 +0000 UTC" firstStartedPulling="2026-01-21 13:55:25.497123586 +0000 UTC m=+2786.460154129" 
lastFinishedPulling="2026-01-21 13:55:26.919704157 +0000 UTC m=+2787.882734700" observedRunningTime="2026-01-21 13:55:27.534423481 +0000 UTC m=+2788.497454024" watchObservedRunningTime="2026-01-21 13:55:27.538295326 +0000 UTC m=+2788.501325869" Jan 21 13:55:34 crc kubenswrapper[4959]: I0121 13:55:34.505649 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:34 crc kubenswrapper[4959]: I0121 13:55:34.506315 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:34 crc kubenswrapper[4959]: I0121 13:55:34.566823 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:34 crc kubenswrapper[4959]: I0121 13:55:34.627001 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:34 crc kubenswrapper[4959]: I0121 13:55:34.804078 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsfw2"] Jan 21 13:55:35 crc kubenswrapper[4959]: I0121 13:55:35.622511 4959 generic.go:334] "Generic (PLEG): container finished" podID="14765967-d282-4405-8ad4-03c801137ed7" containerID="00a27ee77ff5597e6586a7566b03075f58e1e1a92ab0f9daec3b00602f6cd526" exitCode=0 Jan 21 13:55:35 crc kubenswrapper[4959]: I0121 13:55:35.622593 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" event={"ID":"14765967-d282-4405-8ad4-03c801137ed7","Type":"ContainerDied","Data":"00a27ee77ff5597e6586a7566b03075f58e1e1a92ab0f9daec3b00602f6cd526"} Jan 21 13:55:36 crc kubenswrapper[4959]: I0121 13:55:36.630344 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nsfw2" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerName="registry-server" containerID="cri-o://5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4" gracePeriod=2 Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.155465 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.162053 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.340762 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-nova-metadata-neutron-config-0\") pod \"14765967-d282-4405-8ad4-03c801137ed7\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.340820 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ssh-key-openstack-edpm-ipam\") pod \"14765967-d282-4405-8ad4-03c801137ed7\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.340917 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr9vg\" (UniqueName: \"kubernetes.io/projected/527a53d9-45b3-48ef-8adf-ecab726cdd17-kube-api-access-nr9vg\") pod \"527a53d9-45b3-48ef-8adf-ecab726cdd17\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.341182 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-catalog-content\") pod \"527a53d9-45b3-48ef-8adf-ecab726cdd17\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.341229 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-metadata-combined-ca-bundle\") pod \"14765967-d282-4405-8ad4-03c801137ed7\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.341258 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-utilities\") pod \"527a53d9-45b3-48ef-8adf-ecab726cdd17\" (UID: \"527a53d9-45b3-48ef-8adf-ecab726cdd17\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.341312 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ceph\") pod \"14765967-d282-4405-8ad4-03c801137ed7\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.341337 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xskh\" (UniqueName: \"kubernetes.io/projected/14765967-d282-4405-8ad4-03c801137ed7-kube-api-access-5xskh\") pod \"14765967-d282-4405-8ad4-03c801137ed7\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.341365 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"14765967-d282-4405-8ad4-03c801137ed7\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.341400 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-inventory\") pod \"14765967-d282-4405-8ad4-03c801137ed7\" (UID: \"14765967-d282-4405-8ad4-03c801137ed7\") " Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.342469 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-utilities" (OuterVolumeSpecName: "utilities") pod "527a53d9-45b3-48ef-8adf-ecab726cdd17" (UID: "527a53d9-45b3-48ef-8adf-ecab726cdd17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.352905 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527a53d9-45b3-48ef-8adf-ecab726cdd17-kube-api-access-nr9vg" (OuterVolumeSpecName: "kube-api-access-nr9vg") pod "527a53d9-45b3-48ef-8adf-ecab726cdd17" (UID: "527a53d9-45b3-48ef-8adf-ecab726cdd17"). InnerVolumeSpecName "kube-api-access-nr9vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.353186 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14765967-d282-4405-8ad4-03c801137ed7-kube-api-access-5xskh" (OuterVolumeSpecName: "kube-api-access-5xskh") pod "14765967-d282-4405-8ad4-03c801137ed7" (UID: "14765967-d282-4405-8ad4-03c801137ed7"). InnerVolumeSpecName "kube-api-access-5xskh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.353241 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "14765967-d282-4405-8ad4-03c801137ed7" (UID: "14765967-d282-4405-8ad4-03c801137ed7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.353286 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ceph" (OuterVolumeSpecName: "ceph") pod "14765967-d282-4405-8ad4-03c801137ed7" (UID: "14765967-d282-4405-8ad4-03c801137ed7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.371807 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-inventory" (OuterVolumeSpecName: "inventory") pod "14765967-d282-4405-8ad4-03c801137ed7" (UID: "14765967-d282-4405-8ad4-03c801137ed7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.373084 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "14765967-d282-4405-8ad4-03c801137ed7" (UID: "14765967-d282-4405-8ad4-03c801137ed7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.374009 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "14765967-d282-4405-8ad4-03c801137ed7" (UID: "14765967-d282-4405-8ad4-03c801137ed7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.375708 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14765967-d282-4405-8ad4-03c801137ed7" (UID: "14765967-d282-4405-8ad4-03c801137ed7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.378569 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "527a53d9-45b3-48ef-8adf-ecab726cdd17" (UID: "527a53d9-45b3-48ef-8adf-ecab726cdd17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.443889 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444153 4959 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444165 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527a53d9-45b3-48ef-8adf-ecab726cdd17-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444178 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444187 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xskh\" (UniqueName: \"kubernetes.io/projected/14765967-d282-4405-8ad4-03c801137ed7-kube-api-access-5xskh\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444199 4959 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444210 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444220 4959 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" 
(UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444229 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14765967-d282-4405-8ad4-03c801137ed7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.444238 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr9vg\" (UniqueName: \"kubernetes.io/projected/527a53d9-45b3-48ef-8adf-ecab726cdd17-kube-api-access-nr9vg\") on node \"crc\" DevicePath \"\"" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.640673 4959 generic.go:334] "Generic (PLEG): container finished" podID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerID="5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4" exitCode=0 Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.640826 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsfw2" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.641720 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsfw2" event={"ID":"527a53d9-45b3-48ef-8adf-ecab726cdd17","Type":"ContainerDied","Data":"5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4"} Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.641830 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsfw2" event={"ID":"527a53d9-45b3-48ef-8adf-ecab726cdd17","Type":"ContainerDied","Data":"cfeb34aa503223d4e11bdcedb5ef13b8ca821515558226d3013524ec039a4f08"} Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.641898 4959 scope.go:117] "RemoveContainer" containerID="5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.644222 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" event={"ID":"14765967-d282-4405-8ad4-03c801137ed7","Type":"ContainerDied","Data":"7df82e76c830021b844ab1b41477bfadc550f4cfd36ca2105c41cb779c569e0f"} Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.644256 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7df82e76c830021b844ab1b41477bfadc550f4cfd36ca2105c41cb779c569e0f" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.644316 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.669261 4959 scope.go:117] "RemoveContainer" containerID="7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.692542 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsfw2"] Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.707994 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsfw2"] Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.719514 4959 scope.go:117] "RemoveContainer" containerID="f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.767227 4959 scope.go:117] "RemoveContainer" containerID="5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4" Jan 21 13:55:37 crc kubenswrapper[4959]: E0121 13:55:37.767626 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4\": container with ID starting with 5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4 not found: ID does not exist" containerID="5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.767670 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4"} err="failed to get container status \"5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4\": rpc error: code = NotFound desc = could not find container \"5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4\": container with ID starting with 5f5ba926eb33ee5b52d497b874e236a60d66f46ec61743f7763974975d5ef0f4 not found: ID does not exist" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.767700 4959 scope.go:117] "RemoveContainer" containerID="7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a" Jan 21 13:55:37 crc kubenswrapper[4959]: E0121 13:55:37.767906 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a\": container with ID starting with 7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a not found: ID does not exist" containerID="7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.767932 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a"} err="failed to get container status \"7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a\": rpc error: code = NotFound desc = could not find container \"7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a\": container with ID starting with 7e5b54d1f661c34f4c503c831ae25e9414f5f6b3dac2069fa99dfd5824128f3a not found: ID does not exist" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.767946 4959 scope.go:117] "RemoveContainer" containerID="f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06" Jan 21 13:55:37 crc kubenswrapper[4959]: E0121 13:55:37.768144 4959 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06\": container with ID starting with f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06 not found: ID does not exist" containerID="f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.768174 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06"} err="failed to get container status \"f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06\": rpc error: code = NotFound desc = could not find container \"f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06\": container with ID starting with f1045c2a219fd5dd38fbd1a1e965ce0ea9a08e83ee38977d9da24070ea188f06 not found: ID does not exist" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.775164 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8"] Jan 21 13:55:37 crc kubenswrapper[4959]: E0121 13:55:37.775633 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerName="extract-content" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.775656 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerName="extract-content" Jan 21 13:55:37 crc kubenswrapper[4959]: E0121 13:55:37.775666 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerName="registry-server" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.775674 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerName="registry-server" Jan 21 13:55:37 crc kubenswrapper[4959]: E0121 13:55:37.775689 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14765967-d282-4405-8ad4-03c801137ed7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.775701 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="14765967-d282-4405-8ad4-03c801137ed7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 13:55:37 crc kubenswrapper[4959]: E0121 13:55:37.775718 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerName="extract-utilities" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.775726 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerName="extract-utilities" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.775971 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="14765967-d282-4405-8ad4-03c801137ed7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.775985 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" containerName="registry-server" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.776979 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.782671 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.782891 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.783121 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.783258 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.783383 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.783592 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.790402 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8"] Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.951977 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.952043 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.952199 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.952274 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.952299 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:37 crc kubenswrapper[4959]: I0121 13:55:37.952367 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bxzh\" (UniqueName: \"kubernetes.io/projected/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-kube-api-access-5bxzh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.056731 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.056847 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.056899 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bxzh\" (UniqueName: \"kubernetes.io/projected/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-kube-api-access-5bxzh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.056961 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.057007 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.057157 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.069677 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.074543 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.078485 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.079256 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.080912 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.081645 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bxzh\" (UniqueName: \"kubernetes.io/projected/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-kube-api-access-5bxzh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.136368 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 13:55:38 crc kubenswrapper[4959]: I0121 13:55:38.689692 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8"] Jan 21 13:55:39 crc kubenswrapper[4959]: I0121 13:55:39.301709 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527a53d9-45b3-48ef-8adf-ecab726cdd17" path="/var/lib/kubelet/pods/527a53d9-45b3-48ef-8adf-ecab726cdd17/volumes" Jan 21 13:55:39 crc kubenswrapper[4959]: I0121 13:55:39.662419 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" event={"ID":"7f2ea3fd-7ce6-4792-b694-c174b9dd1475","Type":"ContainerStarted","Data":"e1ab0c55633034a24f7b3415453d952d8787b395da179bdd3c4832306f8d0f1d"} Jan 21 13:55:39 crc kubenswrapper[4959]: I0121 13:55:39.662690 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" event={"ID":"7f2ea3fd-7ce6-4792-b694-c174b9dd1475","Type":"ContainerStarted","Data":"3a5d0eb0089c67eee5ab1c7d98c161250fe6fd8564ace1e23046e8a1c54baf6d"} Jan 21 13:55:39 crc kubenswrapper[4959]: I0121 13:55:39.681776 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" podStartSLOduration=2.16968055 podStartE2EDuration="2.681754582s" podCreationTimestamp="2026-01-21 13:55:37 +0000 UTC" firstStartedPulling="2026-01-21 13:55:38.690494054 +0000 UTC m=+2799.653524597" lastFinishedPulling="2026-01-21 13:55:39.202568086 +0000 UTC m=+2800.165598629" observedRunningTime="2026-01-21 13:55:39.680176959 +0000 UTC m=+2800.643207512" watchObservedRunningTime="2026-01-21 13:55:39.681754582 +0000 UTC m=+2800.644785125" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.445532 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zdqgz"] Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.448676 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.468377 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zdqgz"] Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.509497 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47h4v\" (UniqueName: \"kubernetes.io/projected/87772ae0-593b-4063-b030-70d1c09f316e-kube-api-access-47h4v\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.509546 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-catalog-content\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.509644 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-utilities\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.611626 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47h4v\" (UniqueName: \"kubernetes.io/projected/87772ae0-593b-4063-b030-70d1c09f316e-kube-api-access-47h4v\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.611728 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-catalog-content\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.611843 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-utilities\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.612451 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-catalog-content\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.612786 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-utilities\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.635601 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-47h4v\" (UniqueName: \"kubernetes.io/projected/87772ae0-593b-4063-b030-70d1c09f316e-kube-api-access-47h4v\") pod \"certified-operators-zdqgz\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:35 crc kubenswrapper[4959]: I0121 13:56:35.784025 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:36 crc kubenswrapper[4959]: I0121 13:56:36.283559 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zdqgz"] Jan 21 13:56:37 crc kubenswrapper[4959]: I0121 13:56:37.215604 4959 generic.go:334] "Generic (PLEG): container finished" podID="87772ae0-593b-4063-b030-70d1c09f316e" containerID="22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499" exitCode=0 Jan 21 13:56:37 crc kubenswrapper[4959]: I0121 13:56:37.215753 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdqgz" event={"ID":"87772ae0-593b-4063-b030-70d1c09f316e","Type":"ContainerDied","Data":"22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499"} Jan 21 13:56:37 crc kubenswrapper[4959]: I0121 13:56:37.215964 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdqgz" event={"ID":"87772ae0-593b-4063-b030-70d1c09f316e","Type":"ContainerStarted","Data":"6f333c8eec2c0e43cef3d461815f6d3f105d67d8bf7a8a0dac58466f493e5425"} Jan 21 13:56:37 crc kubenswrapper[4959]: I0121 13:56:37.217122 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:56:38 crc kubenswrapper[4959]: I0121 13:56:38.224847 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdqgz" event={"ID":"87772ae0-593b-4063-b030-70d1c09f316e","Type":"ContainerStarted","Data":"894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73"} Jan 21 13:56:39 crc kubenswrapper[4959]: I0121 13:56:39.245464 4959 generic.go:334] "Generic (PLEG): container finished" podID="87772ae0-593b-4063-b030-70d1c09f316e" containerID="894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73" exitCode=0 Jan 21 13:56:39 crc kubenswrapper[4959]: I0121 13:56:39.245787 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdqgz" event={"ID":"87772ae0-593b-4063-b030-70d1c09f316e","Type":"ContainerDied","Data":"894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73"} Jan 21 13:56:40 crc kubenswrapper[4959]: I0121 13:56:40.263002 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdqgz" event={"ID":"87772ae0-593b-4063-b030-70d1c09f316e","Type":"ContainerStarted","Data":"bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e"} Jan 21 13:56:40 crc kubenswrapper[4959]: I0121 13:56:40.287688 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zdqgz" podStartSLOduration=2.83885773 podStartE2EDuration="5.287674367s" podCreationTimestamp="2026-01-21 13:56:35 +0000 UTC" firstStartedPulling="2026-01-21 13:56:37.216900148 +0000 UTC m=+2858.179930691" lastFinishedPulling="2026-01-21 13:56:39.665716785 +0000 UTC m=+2860.628747328" observedRunningTime="2026-01-21 13:56:40.279433863 +0000 UTC m=+2861.242464416" watchObservedRunningTime="2026-01-21 
13:56:40.287674367 +0000 UTC m=+2861.250704910" Jan 21 13:56:45 crc kubenswrapper[4959]: I0121 13:56:45.784401 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:45 crc kubenswrapper[4959]: I0121 13:56:45.784930 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:45 crc kubenswrapper[4959]: I0121 13:56:45.848870 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:46 crc kubenswrapper[4959]: I0121 13:56:46.369964 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:46 crc kubenswrapper[4959]: I0121 13:56:46.431977 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zdqgz"] Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.325553 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zdqgz" podUID="87772ae0-593b-4063-b030-70d1c09f316e" containerName="registry-server" containerID="cri-o://bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e" gracePeriod=2 Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.720600 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.866615 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-utilities\") pod \"87772ae0-593b-4063-b030-70d1c09f316e\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.866843 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47h4v\" (UniqueName: \"kubernetes.io/projected/87772ae0-593b-4063-b030-70d1c09f316e-kube-api-access-47h4v\") pod \"87772ae0-593b-4063-b030-70d1c09f316e\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.866873 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-catalog-content\") pod \"87772ae0-593b-4063-b030-70d1c09f316e\" (UID: \"87772ae0-593b-4063-b030-70d1c09f316e\") " Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.868426 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-utilities" (OuterVolumeSpecName: "utilities") pod "87772ae0-593b-4063-b030-70d1c09f316e" (UID: "87772ae0-593b-4063-b030-70d1c09f316e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.874963 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87772ae0-593b-4063-b030-70d1c09f316e-kube-api-access-47h4v" (OuterVolumeSpecName: "kube-api-access-47h4v") pod "87772ae0-593b-4063-b030-70d1c09f316e" (UID: "87772ae0-593b-4063-b030-70d1c09f316e"). InnerVolumeSpecName "kube-api-access-47h4v". 
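
The kube-api-access-47h4v volume being torn down above is the projected service-account volume that every pod in this log carries (kube-api-access-nr9vg, -5xskh, -5bxzh): a projection of the service-account token, the cluster CA bundle, and the namespace, mounted so workloads can reach the API server. Inside a running container those files land at a conventional path; a small Go example that reads them, degrading gracefully when run outside a pod:

package main

import (
	"fmt"
	"os"
)

// Reads the projected service-account files that a kube-api-access-*
// volume mounts at the standard in-pod location.
func main() {
	const base = "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(base + "/" + f)
		if err != nil {
			fmt.Println(f, "not available:", err) // e.g. when run outside a pod
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}
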
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.925405 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87772ae0-593b-4063-b030-70d1c09f316e" (UID: "87772ae0-593b-4063-b030-70d1c09f316e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.968416 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47h4v\" (UniqueName: \"kubernetes.io/projected/87772ae0-593b-4063-b030-70d1c09f316e-kube-api-access-47h4v\") on node \"crc\" DevicePath \"\"" Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.968452 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:56:48 crc kubenswrapper[4959]: I0121 13:56:48.968490 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87772ae0-593b-4063-b030-70d1c09f316e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.339085 4959 generic.go:334] "Generic (PLEG): container finished" podID="87772ae0-593b-4063-b030-70d1c09f316e" containerID="bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e" exitCode=0 Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.339209 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdqgz" event={"ID":"87772ae0-593b-4063-b030-70d1c09f316e","Type":"ContainerDied","Data":"bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e"} Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.339264 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdqgz" event={"ID":"87772ae0-593b-4063-b030-70d1c09f316e","Type":"ContainerDied","Data":"6f333c8eec2c0e43cef3d461815f6d3f105d67d8bf7a8a0dac58466f493e5425"} Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.339301 4959 scope.go:117] "RemoveContainer" containerID="bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.339565 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zdqgz" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.376018 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zdqgz"] Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.384443 4959 scope.go:117] "RemoveContainer" containerID="894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.387043 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zdqgz"] Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.401864 4959 scope.go:117] "RemoveContainer" containerID="22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.451835 4959 scope.go:117] "RemoveContainer" containerID="bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e" Jan 21 13:56:49 crc kubenswrapper[4959]: E0121 13:56:49.452712 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e\": container with ID starting with bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e not found: ID does not exist" containerID="bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.452745 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e"} err="failed to get container status \"bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e\": rpc error: code = NotFound desc = could not find container \"bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e\": container with ID starting with bf36734252912df192a42bb989da8426be758f41d30db7d46fdaee0005db143e not found: ID does not exist" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.452767 4959 scope.go:117] "RemoveContainer" containerID="894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73" Jan 21 13:56:49 crc kubenswrapper[4959]: E0121 13:56:49.453280 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73\": container with ID starting with 894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73 not found: ID does not exist" containerID="894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.453300 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73"} err="failed to get container status \"894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73\": rpc error: code = NotFound desc = could not find container \"894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73\": container with ID starting with 894e27b8521e6fb9b89f36f274ae96d5ba1a4fb85d9c26eb60e50e648da54e73 not found: ID does not exist" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.453313 4959 scope.go:117] "RemoveContainer" containerID="22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499" Jan 21 13:56:49 crc kubenswrapper[4959]: E0121 13:56:49.453676 4959 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499\": container with ID starting with 22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499 not found: ID does not exist" containerID="22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499" Jan 21 13:56:49 crc kubenswrapper[4959]: I0121 13:56:49.453699 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499"} err="failed to get container status \"22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499\": rpc error: code = NotFound desc = could not find container \"22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499\": container with ID starting with 22541d4589ac1308afb638570d2e24f3c2c5dc434b78f27b6314d3d0f364f499 not found: ID does not exist" Jan 21 13:56:51 crc kubenswrapper[4959]: I0121 13:56:51.297545 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87772ae0-593b-4063-b030-70d1c09f316e" path="/var/lib/kubelet/pods/87772ae0-593b-4063-b030-70d1c09f316e/volumes" Jan 21 13:56:51 crc kubenswrapper[4959]: I0121 13:56:51.380064 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:56:51 crc kubenswrapper[4959]: I0121 13:56:51.380134 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:57:21 crc kubenswrapper[4959]: I0121 13:57:21.380809 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:57:21 crc kubenswrapper[4959]: I0121 13:57:21.381310 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.379375 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.379951 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.380006 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.380899 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.381034 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" gracePeriod=600 Jan 21 13:57:51 crc kubenswrapper[4959]: E0121 13:57:51.505879 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.889350 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" exitCode=0 Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.889406 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"} Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.889468 4959 scope.go:117] "RemoveContainer" containerID="0e072505b3f6e8e4daa3192ee931518b321029b68c6efcf7eab398b25eb749ee" Jan 21 13:57:51 crc kubenswrapper[4959]: I0121 13:57:51.890002 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:57:51 crc kubenswrapper[4959]: E0121 13:57:51.890302 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:58:06 crc kubenswrapper[4959]: I0121 13:58:06.286538 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:58:06 crc kubenswrapper[4959]: E0121 13:58:06.287422 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:58:17 crc 
kubenswrapper[4959]: I0121 13:58:17.286768 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:58:17 crc kubenswrapper[4959]: E0121 13:58:17.287661 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:58:30 crc kubenswrapper[4959]: I0121 13:58:30.286624 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:58:30 crc kubenswrapper[4959]: E0121 13:58:30.287460 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:58:43 crc kubenswrapper[4959]: I0121 13:58:43.286651 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:58:43 crc kubenswrapper[4959]: E0121 13:58:43.288757 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:58:56 crc kubenswrapper[4959]: I0121 13:58:56.286685 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:58:56 crc kubenswrapper[4959]: E0121 13:58:56.287804 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:59:10 crc kubenswrapper[4959]: I0121 13:59:10.285922 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:59:10 crc kubenswrapper[4959]: E0121 13:59:10.286625 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:59:25 crc kubenswrapper[4959]: I0121 13:59:25.286892 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:59:25 crc
kubenswrapper[4959]: E0121 13:59:25.287844 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:59:38 crc kubenswrapper[4959]: I0121 13:59:38.287357 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:59:38 crc kubenswrapper[4959]: E0121 13:59:38.288322 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 13:59:53 crc kubenswrapper[4959]: I0121 13:59:53.286166 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 13:59:53 crc kubenswrapper[4959]: E0121 13:59:53.287060 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.159575 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk"] Jan 21 14:00:00 crc kubenswrapper[4959]: E0121 14:00:00.160724 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87772ae0-593b-4063-b030-70d1c09f316e" containerName="extract-utilities" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.160746 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="87772ae0-593b-4063-b030-70d1c09f316e" containerName="extract-utilities" Jan 21 14:00:00 crc kubenswrapper[4959]: E0121 14:00:00.160776 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87772ae0-593b-4063-b030-70d1c09f316e" containerName="registry-server" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.160784 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="87772ae0-593b-4063-b030-70d1c09f316e" containerName="registry-server" Jan 21 14:00:00 crc kubenswrapper[4959]: E0121 14:00:00.160818 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87772ae0-593b-4063-b030-70d1c09f316e" containerName="extract-content" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.160826 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="87772ae0-593b-4063-b030-70d1c09f316e" containerName="extract-content" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.161126 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="87772ae0-593b-4063-b030-70d1c09f316e" containerName="registry-server" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.161890 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.165239 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.165285 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.173288 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk"] Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.296351 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1910a694-845b-4ccb-8377-ef5d3a7d947f-config-volume\") pod \"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.296491 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1910a694-845b-4ccb-8377-ef5d3a7d947f-secret-volume\") pod \"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.296689 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvpz8\" (UniqueName: \"kubernetes.io/projected/1910a694-845b-4ccb-8377-ef5d3a7d947f-kube-api-access-lvpz8\") pod \"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.398760 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvpz8\" (UniqueName: \"kubernetes.io/projected/1910a694-845b-4ccb-8377-ef5d3a7d947f-kube-api-access-lvpz8\") pod \"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.398861 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1910a694-845b-4ccb-8377-ef5d3a7d947f-config-volume\") pod \"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.398911 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1910a694-845b-4ccb-8377-ef5d3a7d947f-secret-volume\") pod \"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.400310 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1910a694-845b-4ccb-8377-ef5d3a7d947f-config-volume\") pod 
\"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.416213 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1910a694-845b-4ccb-8377-ef5d3a7d947f-secret-volume\") pod \"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.428481 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvpz8\" (UniqueName: \"kubernetes.io/projected/1910a694-845b-4ccb-8377-ef5d3a7d947f-kube-api-access-lvpz8\") pod \"collect-profiles-29483400-6fvxk\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.483648 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:00 crc kubenswrapper[4959]: I0121 14:00:00.933769 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk"] Jan 21 14:00:01 crc kubenswrapper[4959]: I0121 14:00:01.054685 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" event={"ID":"1910a694-845b-4ccb-8377-ef5d3a7d947f","Type":"ContainerStarted","Data":"01ca0256d6ada85f7d2cff9e963a1530583a7a672afc214bbc409102428f6d4b"} Jan 21 14:00:02 crc kubenswrapper[4959]: I0121 14:00:02.074878 4959 generic.go:334] "Generic (PLEG): container finished" podID="1910a694-845b-4ccb-8377-ef5d3a7d947f" containerID="282c68508799439f63377d829daef2b3a04eaf0b70565da712b1fab3907597cc" exitCode=0 Jan 21 14:00:02 crc kubenswrapper[4959]: I0121 14:00:02.075298 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" event={"ID":"1910a694-845b-4ccb-8377-ef5d3a7d947f","Type":"ContainerDied","Data":"282c68508799439f63377d829daef2b3a04eaf0b70565da712b1fab3907597cc"} Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.395744 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.563275 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1910a694-845b-4ccb-8377-ef5d3a7d947f-config-volume\") pod \"1910a694-845b-4ccb-8377-ef5d3a7d947f\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.563624 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1910a694-845b-4ccb-8377-ef5d3a7d947f-secret-volume\") pod \"1910a694-845b-4ccb-8377-ef5d3a7d947f\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.563730 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvpz8\" (UniqueName: \"kubernetes.io/projected/1910a694-845b-4ccb-8377-ef5d3a7d947f-kube-api-access-lvpz8\") pod \"1910a694-845b-4ccb-8377-ef5d3a7d947f\" (UID: \"1910a694-845b-4ccb-8377-ef5d3a7d947f\") " Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.564351 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1910a694-845b-4ccb-8377-ef5d3a7d947f-config-volume" (OuterVolumeSpecName: "config-volume") pod "1910a694-845b-4ccb-8377-ef5d3a7d947f" (UID: "1910a694-845b-4ccb-8377-ef5d3a7d947f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.570850 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1910a694-845b-4ccb-8377-ef5d3a7d947f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1910a694-845b-4ccb-8377-ef5d3a7d947f" (UID: "1910a694-845b-4ccb-8377-ef5d3a7d947f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.572145 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1910a694-845b-4ccb-8377-ef5d3a7d947f-kube-api-access-lvpz8" (OuterVolumeSpecName: "kube-api-access-lvpz8") pod "1910a694-845b-4ccb-8377-ef5d3a7d947f" (UID: "1910a694-845b-4ccb-8377-ef5d3a7d947f"). InnerVolumeSpecName "kube-api-access-lvpz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.665897 4959 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1910a694-845b-4ccb-8377-ef5d3a7d947f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.665947 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvpz8\" (UniqueName: \"kubernetes.io/projected/1910a694-845b-4ccb-8377-ef5d3a7d947f-kube-api-access-lvpz8\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:03 crc kubenswrapper[4959]: I0121 14:00:03.665957 4959 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1910a694-845b-4ccb-8377-ef5d3a7d947f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:04 crc kubenswrapper[4959]: I0121 14:00:04.091637 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" event={"ID":"1910a694-845b-4ccb-8377-ef5d3a7d947f","Type":"ContainerDied","Data":"01ca0256d6ada85f7d2cff9e963a1530583a7a672afc214bbc409102428f6d4b"} Jan 21 14:00:04 crc kubenswrapper[4959]: I0121 14:00:04.091675 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ca0256d6ada85f7d2cff9e963a1530583a7a672afc214bbc409102428f6d4b" Jan 21 14:00:04 crc kubenswrapper[4959]: I0121 14:00:04.091721 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-6fvxk" Jan 21 14:00:04 crc kubenswrapper[4959]: I0121 14:00:04.470515 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"] Jan 21 14:00:04 crc kubenswrapper[4959]: I0121 14:00:04.479443 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-wq7fw"] Jan 21 14:00:05 crc kubenswrapper[4959]: I0121 14:00:05.289322 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 14:00:05 crc kubenswrapper[4959]: E0121 14:00:05.289693 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:00:05 crc kubenswrapper[4959]: I0121 14:00:05.299178 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b6db65-10de-4913-a8f6-b9040c016760" path="/var/lib/kubelet/pods/82b6db65-10de-4913-a8f6-b9040c016760/volumes" Jan 21 14:00:19 crc kubenswrapper[4959]: I0121 14:00:19.292811 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 14:00:19 crc kubenswrapper[4959]: E0121 14:00:19.293562 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:00:30 crc kubenswrapper[4959]: I0121 14:00:30.291546 4959 generic.go:334] "Generic (PLEG): container finished" podID="7f2ea3fd-7ce6-4792-b694-c174b9dd1475" containerID="e1ab0c55633034a24f7b3415453d952d8787b395da179bdd3c4832306f8d0f1d" exitCode=0 Jan 21 14:00:30 crc kubenswrapper[4959]: I0121 14:00:30.291629 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" event={"ID":"7f2ea3fd-7ce6-4792-b694-c174b9dd1475","Type":"ContainerDied","Data":"e1ab0c55633034a24f7b3415453d952d8787b395da179bdd3c4832306f8d0f1d"} Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.286980 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 14:00:31 crc kubenswrapper[4959]: E0121 14:00:31.287522 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.696530 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.779346 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ssh-key-openstack-edpm-ipam\") pod \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.779430 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-inventory\") pod \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.779502 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ceph\") pod \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.779568 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-secret-0\") pod \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.779629 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-combined-ca-bundle\") pod \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.779682 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bxzh\" (UniqueName: 
\"kubernetes.io/projected/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-kube-api-access-5bxzh\") pod \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\" (UID: \"7f2ea3fd-7ce6-4792-b694-c174b9dd1475\") " Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.786116 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ceph" (OuterVolumeSpecName: "ceph") pod "7f2ea3fd-7ce6-4792-b694-c174b9dd1475" (UID: "7f2ea3fd-7ce6-4792-b694-c174b9dd1475"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.786314 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7f2ea3fd-7ce6-4792-b694-c174b9dd1475" (UID: "7f2ea3fd-7ce6-4792-b694-c174b9dd1475"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.789312 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-kube-api-access-5bxzh" (OuterVolumeSpecName: "kube-api-access-5bxzh") pod "7f2ea3fd-7ce6-4792-b694-c174b9dd1475" (UID: "7f2ea3fd-7ce6-4792-b694-c174b9dd1475"). InnerVolumeSpecName "kube-api-access-5bxzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.806693 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7f2ea3fd-7ce6-4792-b694-c174b9dd1475" (UID: "7f2ea3fd-7ce6-4792-b694-c174b9dd1475"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.806975 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-inventory" (OuterVolumeSpecName: "inventory") pod "7f2ea3fd-7ce6-4792-b694-c174b9dd1475" (UID: "7f2ea3fd-7ce6-4792-b694-c174b9dd1475"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.808923 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f2ea3fd-7ce6-4792-b694-c174b9dd1475" (UID: "7f2ea3fd-7ce6-4792-b694-c174b9dd1475"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.881732 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.881763 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.881773 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.881782 4959 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.881790 4959 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:31 crc kubenswrapper[4959]: I0121 14:00:31.881800 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bxzh\" (UniqueName: \"kubernetes.io/projected/7f2ea3fd-7ce6-4792-b694-c174b9dd1475-kube-api-access-5bxzh\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:32 crc kubenswrapper[4959]: I0121 14:00:32.306744 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" event={"ID":"7f2ea3fd-7ce6-4792-b694-c174b9dd1475","Type":"ContainerDied","Data":"3a5d0eb0089c67eee5ab1c7d98c161250fe6fd8564ace1e23046e8a1c54baf6d"} Jan 21 14:00:32 crc kubenswrapper[4959]: I0121 14:00:32.306774 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8" Jan 21 14:00:32 crc kubenswrapper[4959]: I0121 14:00:32.306790 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a5d0eb0089c67eee5ab1c7d98c161250fe6fd8564ace1e23046e8a1c54baf6d" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.639378 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq"] Jan 21 14:00:35 crc kubenswrapper[4959]: E0121 14:00:35.640381 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1910a694-845b-4ccb-8377-ef5d3a7d947f" containerName="collect-profiles" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.640398 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="1910a694-845b-4ccb-8377-ef5d3a7d947f" containerName="collect-profiles" Jan 21 14:00:35 crc kubenswrapper[4959]: E0121 14:00:35.640429 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2ea3fd-7ce6-4792-b694-c174b9dd1475" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.640439 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2ea3fd-7ce6-4792-b694-c174b9dd1475" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.640650 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="1910a694-845b-4ccb-8377-ef5d3a7d947f" containerName="collect-profiles" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.640677 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2ea3fd-7ce6-4792-b694-c174b9dd1475" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.647564 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.657777 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.658029 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.658235 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hdk9f" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.658896 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.659036 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.659082 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.659254 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.659044 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.659261 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.715954 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq"] Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.858720 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.859127 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.859251 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.859365 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.859559 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.859683 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.859812 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.859917 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grqxs\" (UniqueName: \"kubernetes.io/projected/d259192c-0b25-4615-b3c1-473a23e9facf-kube-api-access-grqxs\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.860036 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.860200 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.860311 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961131 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961225 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961279 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961329 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961367 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961457 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961510 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961610 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961665 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961700 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.961724 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grqxs\" (UniqueName: \"kubernetes.io/projected/d259192c-0b25-4615-b3c1-473a23e9facf-kube-api-access-grqxs\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.963037 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.964523 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.967278 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.967631 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.977329 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.977557 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.978806 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.979496 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.980259 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.980958 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grqxs\" (UniqueName: \"kubernetes.io/projected/d259192c-0b25-4615-b3c1-473a23e9facf-kube-api-access-grqxs\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:35 crc kubenswrapper[4959]: I0121 14:00:35.986686 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" Jan 21 14:00:36 crc kubenswrapper[4959]: I0121 14:00:36.144542 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq"
Jan 21 14:00:36 crc kubenswrapper[4959]: I0121 14:00:36.276082 4959 scope.go:117] "RemoveContainer" containerID="bb43e52d206f37570ae97f69ad2c6ba727f2bf0c3942d3f72de2bcdddc2c2f38"
Jan 21 14:00:36 crc kubenswrapper[4959]: I0121 14:00:36.709798 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq"]
Jan 21 14:00:37 crc kubenswrapper[4959]: I0121 14:00:37.356310 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" event={"ID":"d259192c-0b25-4615-b3c1-473a23e9facf","Type":"ContainerStarted","Data":"9bfc2d077a8b5cbc25bc131e36e900b29ae6dd864545a48213a08a7d482e1aef"}
Jan 21 14:00:38 crc kubenswrapper[4959]: I0121 14:00:38.365589 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" event={"ID":"d259192c-0b25-4615-b3c1-473a23e9facf","Type":"ContainerStarted","Data":"7cce843b084fe76c1bffc4f48dbb736bb50e31c46a1ccccec78615fa60003a60"}
Jan 21 14:00:38 crc kubenswrapper[4959]: I0121 14:00:38.388585 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" podStartSLOduration=2.817490163 podStartE2EDuration="3.388570303s" podCreationTimestamp="2026-01-21 14:00:35 +0000 UTC" firstStartedPulling="2026-01-21 14:00:36.710878836 +0000 UTC m=+3097.673909379" lastFinishedPulling="2026-01-21 14:00:37.281958976 +0000 UTC m=+3098.244989519" observedRunningTime="2026-01-21 14:00:38.382088578 +0000 UTC m=+3099.345119121" watchObservedRunningTime="2026-01-21 14:00:38.388570303 +0000 UTC m=+3099.351600846"
Jan 21 14:00:45 crc kubenswrapper[4959]: I0121 14:00:45.287357 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:00:45 crc kubenswrapper[4959]: E0121 14:00:45.288327 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.386927 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sx2zl"]
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.389389 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.408971 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx2zl"]
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.447565 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qwg\" (UniqueName: \"kubernetes.io/projected/87f8efe2-3378-43a2-8214-554348d6e338-kube-api-access-g2qwg\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.447648 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-utilities\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.447945 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-catalog-content\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.550078 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-catalog-content\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.550232 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2qwg\" (UniqueName: \"kubernetes.io/projected/87f8efe2-3378-43a2-8214-554348d6e338-kube-api-access-g2qwg\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.550252 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-utilities\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.550836 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-catalog-content\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.550856 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-utilities\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.571030 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2qwg\" (UniqueName: \"kubernetes.io/projected/87f8efe2-3378-43a2-8214-554348d6e338-kube-api-access-g2qwg\") pod \"community-operators-sx2zl\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") " pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:47 crc kubenswrapper[4959]: I0121 14:00:47.711833 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:48 crc kubenswrapper[4959]: I0121 14:00:48.307664 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx2zl"]
Jan 21 14:00:48 crc kubenswrapper[4959]: I0121 14:00:48.474312 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx2zl" event={"ID":"87f8efe2-3378-43a2-8214-554348d6e338","Type":"ContainerStarted","Data":"99300b36f7247f921a3c10e7872102d46ef5b91e15c157d50877fbfe172e2e07"}
Jan 21 14:00:49 crc kubenswrapper[4959]: I0121 14:00:49.486695 4959 generic.go:334] "Generic (PLEG): container finished" podID="87f8efe2-3378-43a2-8214-554348d6e338" containerID="13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad" exitCode=0
Jan 21 14:00:49 crc kubenswrapper[4959]: I0121 14:00:49.486787 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx2zl" event={"ID":"87f8efe2-3378-43a2-8214-554348d6e338","Type":"ContainerDied","Data":"13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad"}
Jan 21 14:00:50 crc kubenswrapper[4959]: I0121 14:00:50.499035 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx2zl" event={"ID":"87f8efe2-3378-43a2-8214-554348d6e338","Type":"ContainerStarted","Data":"1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70"}
Jan 21 14:00:51 crc kubenswrapper[4959]: I0121 14:00:51.510447 4959 generic.go:334] "Generic (PLEG): container finished" podID="87f8efe2-3378-43a2-8214-554348d6e338" containerID="1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70" exitCode=0
Jan 21 14:00:51 crc kubenswrapper[4959]: I0121 14:00:51.510614 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx2zl" event={"ID":"87f8efe2-3378-43a2-8214-554348d6e338","Type":"ContainerDied","Data":"1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70"}
Jan 21 14:00:52 crc kubenswrapper[4959]: I0121 14:00:52.524988 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx2zl" event={"ID":"87f8efe2-3378-43a2-8214-554348d6e338","Type":"ContainerStarted","Data":"4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71"}
Jan 21 14:00:52 crc kubenswrapper[4959]: I0121 14:00:52.546375 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sx2zl" podStartSLOduration=3.030400057 podStartE2EDuration="5.546359368s" podCreationTimestamp="2026-01-21 14:00:47 +0000 UTC" firstStartedPulling="2026-01-21 14:00:49.489664638 +0000 UTC m=+3110.452695201" lastFinishedPulling="2026-01-21 14:00:52.005623969 +0000 UTC m=+3112.968654512" observedRunningTime="2026-01-21 14:00:52.542118813 +0000 UTC m=+3113.505149356" watchObservedRunningTime="2026-01-21 14:00:52.546359368 +0000 UTC m=+3113.509389911"
Jan 21 14:00:57 crc kubenswrapper[4959]: I0121 14:00:57.712979 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:57 crc kubenswrapper[4959]: I0121 14:00:57.713784 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:57 crc kubenswrapper[4959]: I0121 14:00:57.755937 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:58 crc kubenswrapper[4959]: I0121 14:00:58.286072 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:00:58 crc kubenswrapper[4959]: E0121 14:00:58.286351 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:00:58 crc kubenswrapper[4959]: I0121 14:00:58.608806 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:00:58 crc kubenswrapper[4959]: I0121 14:00:58.654251 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx2zl"]
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.133526 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483401-4qt94"]
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.135162 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.149783 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483401-4qt94"]
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.289231 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-combined-ca-bundle\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.289641 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-config-data\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.289686 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4zhg\" (UniqueName: \"kubernetes.io/projected/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-kube-api-access-b4zhg\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.289782 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-fernet-keys\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.391386 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-fernet-keys\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.391567 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-combined-ca-bundle\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.391666 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-config-data\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.391709 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4zhg\" (UniqueName: \"kubernetes.io/projected/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-kube-api-access-b4zhg\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.400556 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-fernet-keys\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.401721 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-config-data\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.405146 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-combined-ca-bundle\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.409503 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4zhg\" (UniqueName: \"kubernetes.io/projected/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-kube-api-access-b4zhg\") pod \"keystone-cron-29483401-4qt94\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") " pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.456693 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.591713 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sx2zl" podUID="87f8efe2-3378-43a2-8214-554348d6e338" containerName="registry-server" containerID="cri-o://4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71" gracePeriod=2
Jan 21 14:01:00 crc kubenswrapper[4959]: I0121 14:01:00.900774 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483401-4qt94"]
Jan 21 14:01:00 crc kubenswrapper[4959]: W0121 14:01:00.907131 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ecf462b_1ac7_4465_a213_fb1ffcb3c8b3.slice/crio-7ff9ebc508c2d9e5c1ae7a358a2477084debfb1dc93ed8c4ece41adbd1f9c794 WatchSource:0}: Error finding container 7ff9ebc508c2d9e5c1ae7a358a2477084debfb1dc93ed8c4ece41adbd1f9c794: Status 404 returned error can't find the container with id 7ff9ebc508c2d9e5c1ae7a358a2477084debfb1dc93ed8c4ece41adbd1f9c794
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.516504 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.599246 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483401-4qt94" event={"ID":"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3","Type":"ContainerStarted","Data":"15950c588f3c6c75621561dc535d0ba5bd14bb3d51b65be5c3cc51a85e9beff0"}
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.599295 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483401-4qt94" event={"ID":"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3","Type":"ContainerStarted","Data":"7ff9ebc508c2d9e5c1ae7a358a2477084debfb1dc93ed8c4ece41adbd1f9c794"}
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.602387 4959 generic.go:334] "Generic (PLEG): container finished" podID="87f8efe2-3378-43a2-8214-554348d6e338" containerID="4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71" exitCode=0
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.602419 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx2zl" event={"ID":"87f8efe2-3378-43a2-8214-554348d6e338","Type":"ContainerDied","Data":"4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71"}
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.602437 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx2zl" event={"ID":"87f8efe2-3378-43a2-8214-554348d6e338","Type":"ContainerDied","Data":"99300b36f7247f921a3c10e7872102d46ef5b91e15c157d50877fbfe172e2e07"}
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.602453 4959 scope.go:117] "RemoveContainer" containerID="4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.602554 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx2zl"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.618940 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2qwg\" (UniqueName: \"kubernetes.io/projected/87f8efe2-3378-43a2-8214-554348d6e338-kube-api-access-g2qwg\") pod \"87f8efe2-3378-43a2-8214-554348d6e338\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") "
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.619028 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-utilities\") pod \"87f8efe2-3378-43a2-8214-554348d6e338\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") "
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.619161 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-catalog-content\") pod \"87f8efe2-3378-43a2-8214-554348d6e338\" (UID: \"87f8efe2-3378-43a2-8214-554348d6e338\") "
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.620041 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483401-4qt94" podStartSLOduration=1.620029979 podStartE2EDuration="1.620029979s" podCreationTimestamp="2026-01-21 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:01:01.61859548 +0000 UTC m=+3122.581626023" watchObservedRunningTime="2026-01-21 14:01:01.620029979 +0000 UTC m=+3122.583060522"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.621397 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-utilities" (OuterVolumeSpecName: "utilities") pod "87f8efe2-3378-43a2-8214-554348d6e338" (UID: "87f8efe2-3378-43a2-8214-554348d6e338"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.623092 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.624541 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f8efe2-3378-43a2-8214-554348d6e338-kube-api-access-g2qwg" (OuterVolumeSpecName: "kube-api-access-g2qwg") pod "87f8efe2-3378-43a2-8214-554348d6e338" (UID: "87f8efe2-3378-43a2-8214-554348d6e338"). InnerVolumeSpecName "kube-api-access-g2qwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.627394 4959 scope.go:117] "RemoveContainer" containerID="1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.679579 4959 scope.go:117] "RemoveContainer" containerID="13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.683370 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f8efe2-3378-43a2-8214-554348d6e338" (UID: "87f8efe2-3378-43a2-8214-554348d6e338"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.718195 4959 scope.go:117] "RemoveContainer" containerID="4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71"
Jan 21 14:01:01 crc kubenswrapper[4959]: E0121 14:01:01.718656 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71\": container with ID starting with 4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71 not found: ID does not exist" containerID="4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.718711 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71"} err="failed to get container status \"4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71\": rpc error: code = NotFound desc = could not find container \"4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71\": container with ID starting with 4919646199a933583b035f08ea1122b45808ed0e06649c8ac54a4df9831ffb71 not found: ID does not exist"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.718735 4959 scope.go:117] "RemoveContainer" containerID="1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70"
Jan 21 14:01:01 crc kubenswrapper[4959]: E0121 14:01:01.719317 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70\": container with ID starting with 1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70 not found: ID does not exist" containerID="1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.719348 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70"} err="failed to get container status \"1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70\": rpc error: code = NotFound desc = could not find container \"1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70\": container with ID starting with 1049cbdfe86ae7b091e0825890a8ff4cafca5cc8b33d0dfbc5c741e9d22cae70 not found: ID does not exist"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.719369 4959 scope.go:117] "RemoveContainer" containerID="13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad"
Jan 21 14:01:01 crc kubenswrapper[4959]: E0121 14:01:01.719692 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad\": container with ID starting with 13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad not found: ID does not exist" containerID="13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.719734 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad"} err="failed to get container status \"13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad\": rpc error: code = NotFound desc = could not find container \"13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad\": container with ID starting with 13b1399ba348d46edeacd523bd5364c8aea02c271c2987a71048e0852b8d3aad not found: ID does not exist"
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.725448 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2qwg\" (UniqueName: \"kubernetes.io/projected/87f8efe2-3378-43a2-8214-554348d6e338-kube-api-access-g2qwg\") on node \"crc\" DevicePath \"\""
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.725484 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f8efe2-3378-43a2-8214-554348d6e338-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.946140 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx2zl"]
Jan 21 14:01:01 crc kubenswrapper[4959]: I0121 14:01:01.953686 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sx2zl"]
Jan 21 14:01:03 crc kubenswrapper[4959]: I0121 14:01:03.297186 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f8efe2-3378-43a2-8214-554348d6e338" path="/var/lib/kubelet/pods/87f8efe2-3378-43a2-8214-554348d6e338/volumes"
Jan 21 14:01:03 crc kubenswrapper[4959]: I0121 14:01:03.623704 4959 generic.go:334] "Generic (PLEG): container finished" podID="9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3" containerID="15950c588f3c6c75621561dc535d0ba5bd14bb3d51b65be5c3cc51a85e9beff0" exitCode=0
Jan 21 14:01:03 crc kubenswrapper[4959]: I0121 14:01:03.623808 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483401-4qt94" event={"ID":"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3","Type":"ContainerDied","Data":"15950c588f3c6c75621561dc535d0ba5bd14bb3d51b65be5c3cc51a85e9beff0"}
Jan 21 14:01:04 crc kubenswrapper[4959]: I0121 14:01:04.936284 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.090128 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-combined-ca-bundle\") pod \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") "
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.090251 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-fernet-keys\") pod \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") "
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.091360 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4zhg\" (UniqueName: \"kubernetes.io/projected/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-kube-api-access-b4zhg\") pod \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") "
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.091444 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-config-data\") pod \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\" (UID: \"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3\") "
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.096677 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3" (UID: "9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.097439 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-kube-api-access-b4zhg" (OuterVolumeSpecName: "kube-api-access-b4zhg") pod "9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3" (UID: "9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3"). InnerVolumeSpecName "kube-api-access-b4zhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.116937 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3" (UID: "9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.148503 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-config-data" (OuterVolumeSpecName: "config-data") pod "9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3" (UID: "9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.193728 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4zhg\" (UniqueName: \"kubernetes.io/projected/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-kube-api-access-b4zhg\") on node \"crc\" DevicePath \"\""
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.193769 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.193780 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.193789 4959 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.642931 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483401-4qt94" event={"ID":"9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3","Type":"ContainerDied","Data":"7ff9ebc508c2d9e5c1ae7a358a2477084debfb1dc93ed8c4ece41adbd1f9c794"}
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.642978 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff9ebc508c2d9e5c1ae7a358a2477084debfb1dc93ed8c4ece41adbd1f9c794"
Jan 21 14:01:05 crc kubenswrapper[4959]: I0121 14:01:05.643044 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483401-4qt94"
Jan 21 14:01:13 crc kubenswrapper[4959]: I0121 14:01:13.286170 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:01:13 crc kubenswrapper[4959]: E0121 14:01:13.287024 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:01:27 crc kubenswrapper[4959]: I0121 14:01:27.286706 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:01:27 crc kubenswrapper[4959]: E0121 14:01:27.287626 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:01:42 crc kubenswrapper[4959]: I0121 14:01:42.286322 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:01:42 crc kubenswrapper[4959]: E0121 14:01:42.287573 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:01:55 crc kubenswrapper[4959]: I0121 14:01:55.286420 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:01:55 crc kubenswrapper[4959]: E0121 14:01:55.287228 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:02:07 crc kubenswrapper[4959]: I0121 14:02:07.286246 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:02:07 crc kubenswrapper[4959]: E0121 14:02:07.288016 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:02:22 crc kubenswrapper[4959]: I0121 14:02:22.287115 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:02:22 crc kubenswrapper[4959]: E0121 14:02:22.288378 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:02:33 crc kubenswrapper[4959]: I0121 14:02:33.286755 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:02:33 crc kubenswrapper[4959]: E0121 14:02:33.288477 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:02:45 crc kubenswrapper[4959]: I0121 14:02:45.286911 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:02:45 crc kubenswrapper[4959]: E0121 14:02:45.287757 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:02:58 crc kubenswrapper[4959]: I0121 14:02:58.286806 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e"
Jan 21 14:02:58 crc kubenswrapper[4959]: I0121 14:02:58.583954 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"5c5b09cfc927c1cbd8d659e888e39bb3beea0a8126bf7572b394973bb1f27a34"}
Jan 21 14:03:35 crc kubenswrapper[4959]: I0121 14:03:35.916397 4959 generic.go:334] "Generic (PLEG): container finished" podID="d259192c-0b25-4615-b3c1-473a23e9facf" containerID="7cce843b084fe76c1bffc4f48dbb736bb50e31c46a1ccccec78615fa60003a60" exitCode=0
Jan 21 14:03:35 crc kubenswrapper[4959]: I0121 14:03:35.916509 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" event={"ID":"d259192c-0b25-4615-b3c1-473a23e9facf","Type":"ContainerDied","Data":"7cce843b084fe76c1bffc4f48dbb736bb50e31c46a1ccccec78615fa60003a60"}
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.378631 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq"
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.541793 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-0\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.541865 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-1\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.541924 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-1\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.542013 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grqxs\" (UniqueName: \"kubernetes.io/projected/d259192c-0b25-4615-b3c1-473a23e9facf-kube-api-access-grqxs\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.542060 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ssh-key-openstack-edpm-ipam\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.542079 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-nova-extra-config-0\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.542150 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ceph\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.542221 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-inventory\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.542240 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-ceph-nova-0\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.542282 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-custom-ceph-combined-ca-bundle\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.542309 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-0\") pod \"d259192c-0b25-4615-b3c1-473a23e9facf\" (UID: \"d259192c-0b25-4615-b3c1-473a23e9facf\") "
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.549032 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.555462 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d259192c-0b25-4615-b3c1-473a23e9facf-kube-api-access-grqxs" (OuterVolumeSpecName: "kube-api-access-grqxs") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "kube-api-access-grqxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.556196 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ceph" (OuterVolumeSpecName: "ceph") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.574562 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.574996 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.576435 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.578411 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.578884 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-inventory" (OuterVolumeSpecName: "inventory") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.588310 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.598350 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.600109 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d259192c-0b25-4615-b3c1-473a23e9facf" (UID: "d259192c-0b25-4615-b3c1-473a23e9facf"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.644998 4959 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645043 4959 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645054 4959 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645065 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grqxs\" (UniqueName: \"kubernetes.io/projected/d259192c-0b25-4615-b3c1-473a23e9facf-kube-api-access-grqxs\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645074 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645091 4959 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645138 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645151 4959 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645162 4959 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/d259192c-0b25-4615-b3c1-473a23e9facf-ceph-nova-0\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645176 4959 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.645190 4959 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d259192c-0b25-4615-b3c1-473a23e9facf-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.951765 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq" event={"ID":"d259192c-0b25-4615-b3c1-473a23e9facf","Type":"ContainerDied","Data":"9bfc2d077a8b5cbc25bc131e36e900b29ae6dd864545a48213a08a7d482e1aef"}
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.951816 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bfc2d077a8b5cbc25bc131e36e900b29ae6dd864545a48213a08a7d482e1aef"
Jan 21 14:03:37 crc kubenswrapper[4959]: I0121 14:03:37.951833 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.956237 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 21 14:03:52 crc kubenswrapper[4959]: E0121 14:03:52.957575 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f8efe2-3378-43a2-8214-554348d6e338" containerName="registry-server"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.957594 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f8efe2-3378-43a2-8214-554348d6e338" containerName="registry-server"
Jan 21 14:03:52 crc kubenswrapper[4959]: E0121 14:03:52.957609 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d259192c-0b25-4615-b3c1-473a23e9facf" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.957618 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="d259192c-0b25-4615-b3c1-473a23e9facf" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Jan 21 14:03:52 crc kubenswrapper[4959]: E0121 14:03:52.957633 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f8efe2-3378-43a2-8214-554348d6e338" containerName="extract-content"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.957640 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f8efe2-3378-43a2-8214-554348d6e338" containerName="extract-content"
Jan 21 14:03:52 crc kubenswrapper[4959]: E0121 14:03:52.957653 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f8efe2-3378-43a2-8214-554348d6e338" containerName="extract-utilities"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.957659 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f8efe2-3378-43a2-8214-554348d6e338" containerName="extract-utilities"
Jan 21 14:03:52 crc kubenswrapper[4959]: E0121 14:03:52.957690 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3" containerName="keystone-cron"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.957696 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3" containerName="keystone-cron"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.957889 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3" containerName="keystone-cron"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.957912 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f8efe2-3378-43a2-8214-554348d6e338" containerName="registry-server"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.957921 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="d259192c-0b25-4615-b3c1-473a23e9facf" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.959231 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.962901 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.964434 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Jan 21 14:03:52 crc kubenswrapper[4959]: I0121 14:03:52.977425 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.039955 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.041396 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.048709 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.057727 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.094890 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.094936 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.094970 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.094996 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-sys\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.095027 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-dev\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.095058 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.095159 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.095452 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.096158 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/946dc99c-def1-464a-87fe-7a5a8b46b325-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.096231 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.096264 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.096290 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-run\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.096370 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.096393 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.096450 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzfl\" (UniqueName: \"kubernetes.io/projected/946dc99c-def1-464a-87fe-7a5a8b46b325-kube-api-access-dbzfl\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.096484 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.197727 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.197831 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.197864 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/946dc99c-def1-464a-87fe-7a5a8b46b325-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.197911 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.197938 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.197969 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-run\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198013 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198032 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0"
Jan 21
14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198078 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-scripts\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198112 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-run\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198183 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198212 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198241 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c638106d-abd9-4707-8da4-b5c5d1c30f57-ceph\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198274 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198281 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198327 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198392 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7nw\" (UniqueName: \"kubernetes.io/projected/c638106d-abd9-4707-8da4-b5c5d1c30f57-kube-api-access-5m7nw\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198425 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198439 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198534 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-run\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198610 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzfl\" (UniqueName: \"kubernetes.io/projected/946dc99c-def1-464a-87fe-7a5a8b46b325-kube-api-access-dbzfl\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198668 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198704 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-lib-modules\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198765 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198805 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198831 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198856 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-config-data\") pod \"cinder-backup-0\" (UID: 
\"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198887 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.198965 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199000 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-sys\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199021 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199055 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-sys\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199077 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-dev\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199145 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199153 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199215 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199317 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199364 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-dev\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199522 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199546 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-sys\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199571 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-dev\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.199666 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/946dc99c-def1-464a-87fe-7a5a8b46b325-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.206209 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.206222 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/946dc99c-def1-464a-87fe-7a5a8b46b325-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.206393 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.206643 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 
14:03:53.207596 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/946dc99c-def1-464a-87fe-7a5a8b46b325-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.220322 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzfl\" (UniqueName: \"kubernetes.io/projected/946dc99c-def1-464a-87fe-7a5a8b46b325-kube-api-access-dbzfl\") pod \"cinder-volume-volume1-0\" (UID: \"946dc99c-def1-464a-87fe-7a5a8b46b325\") " pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.285843 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.301657 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.301719 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.301750 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-config-data\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.301798 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-sys\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.301819 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.301897 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.301922 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-dev\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.301942 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302000 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-scripts\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302031 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c638106d-abd9-4707-8da4-b5c5d1c30f57-ceph\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302051 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302077 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m7nw\" (UniqueName: \"kubernetes.io/projected/c638106d-abd9-4707-8da4-b5c5d1c30f57-kube-api-access-5m7nw\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302134 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302155 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302183 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-run\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302220 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-lib-modules\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302322 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-lib-modules\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302891 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.302955 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.303360 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.303360 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-sys\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.303446 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-dev\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.303753 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.304120 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.304144 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.304171 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c638106d-abd9-4707-8da4-b5c5d1c30f57-run\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.306525 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c638106d-abd9-4707-8da4-b5c5d1c30f57-ceph\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.307023 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-config-data-custom\") pod 
\"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.307511 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-config-data\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.307629 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-scripts\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.310938 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c638106d-abd9-4707-8da4-b5c5d1c30f57-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.324892 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m7nw\" (UniqueName: \"kubernetes.io/projected/c638106d-abd9-4707-8da4-b5c5d1c30f57-kube-api-access-5m7nw\") pod \"cinder-backup-0\" (UID: \"c638106d-abd9-4707-8da4-b5c5d1c30f57\") " pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.362802 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.612449 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-jnfwt"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.613910 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.643877 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-jnfwt"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.712276 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4fvb\" (UniqueName: \"kubernetes.io/projected/44f71bcf-72a3-4877-bf31-c4c4ae441f70-kube-api-access-g4fvb\") pod \"manila-db-create-jnfwt\" (UID: \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\") " pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.712358 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44f71bcf-72a3-4877-bf31-c4c4ae441f70-operator-scripts\") pod \"manila-db-create-jnfwt\" (UID: \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\") " pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.722537 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-eb24-account-create-update-g2x4m"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.723592 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.731283 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.744108 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-eb24-account-create-update-g2x4m"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.789978 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f99fb4f97-btmqc"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.800969 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.803980 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.804006 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-8h9t4" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.804163 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.804352 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.816404 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjj7s\" (UniqueName: \"kubernetes.io/projected/25b2bbae-a802-455f-8aa1-c4b1f744271a-kube-api-access-qjj7s\") pod \"manila-eb24-account-create-update-g2x4m\" (UID: \"25b2bbae-a802-455f-8aa1-c4b1f744271a\") " pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.816450 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b2bbae-a802-455f-8aa1-c4b1f744271a-operator-scripts\") pod \"manila-eb24-account-create-update-g2x4m\" (UID: \"25b2bbae-a802-455f-8aa1-c4b1f744271a\") " pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.816492 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4fvb\" (UniqueName: \"kubernetes.io/projected/44f71bcf-72a3-4877-bf31-c4c4ae441f70-kube-api-access-g4fvb\") pod \"manila-db-create-jnfwt\" (UID: \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\") " pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.816567 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44f71bcf-72a3-4877-bf31-c4c4ae441f70-operator-scripts\") pod \"manila-db-create-jnfwt\" (UID: \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\") " pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.817413 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44f71bcf-72a3-4877-bf31-c4c4ae441f70-operator-scripts\") pod \"manila-db-create-jnfwt\" (UID: \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\") " pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.811350 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-f99fb4f97-btmqc"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.838767 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.848285 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.856450 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.856671 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.856749 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4fvb\" (UniqueName: \"kubernetes.io/projected/44f71bcf-72a3-4877-bf31-c4c4ae441f70-kube-api-access-g4fvb\") pod \"manila-db-create-jnfwt\" (UID: \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\") " pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.856812 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.856925 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cn2hl" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.902254 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.918629 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.918695 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-config-data\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.918722 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-scripts\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.918805 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6db8d66c-73bb-4ce5-81db-4254e41e78ad-horizon-secret-key\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.918850 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: 
\"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.918873 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.918892 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919007 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-scripts\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919057 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-logs\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919085 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6db8d66c-73bb-4ce5-81db-4254e41e78ad-logs\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919163 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkp8\" (UniqueName: \"kubernetes.io/projected/6db8d66c-73bb-4ce5-81db-4254e41e78ad-kube-api-access-jhkp8\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919185 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-ceph\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919212 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhf29\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-kube-api-access-qhf29\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919231 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjj7s\" (UniqueName: \"kubernetes.io/projected/25b2bbae-a802-455f-8aa1-c4b1f744271a-kube-api-access-qjj7s\") 
pod \"manila-eb24-account-create-update-g2x4m\" (UID: \"25b2bbae-a802-455f-8aa1-c4b1f744271a\") " pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919263 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-config-data\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.919287 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b2bbae-a802-455f-8aa1-c4b1f744271a-operator-scripts\") pod \"manila-eb24-account-create-update-g2x4m\" (UID: \"25b2bbae-a802-455f-8aa1-c4b1f744271a\") " pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.920115 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b2bbae-a802-455f-8aa1-c4b1f744271a-operator-scripts\") pod \"manila-eb24-account-create-update-g2x4m\" (UID: \"25b2bbae-a802-455f-8aa1-c4b1f744271a\") " pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.937523 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.939361 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.942960 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.950167 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.950422 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.954022 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjj7s\" (UniqueName: \"kubernetes.io/projected/25b2bbae-a802-455f-8aa1-c4b1f744271a-kube-api-access-qjj7s\") pod \"manila-eb24-account-create-update-g2x4m\" (UID: \"25b2bbae-a802-455f-8aa1-c4b1f744271a\") " pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.975135 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:03:53 crc kubenswrapper[4959]: I0121 14:03:53.995582 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cdc676b89-n65dj"] Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.001656 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.020763 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-scripts\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.020836 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-logs\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.020868 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6db8d66c-73bb-4ce5-81db-4254e41e78ad-logs\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.020922 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkp8\" (UniqueName: \"kubernetes.io/projected/6db8d66c-73bb-4ce5-81db-4254e41e78ad-kube-api-access-jhkp8\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.020951 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-ceph\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.020975 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhf29\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-kube-api-access-qhf29\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021006 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-config-data\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021120 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021189 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-config-data\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021216 4959 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-scripts\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021254 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6db8d66c-73bb-4ce5-81db-4254e41e78ad-horizon-secret-key\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021282 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021306 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021329 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.021895 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.022731 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-scripts\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.023029 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-logs\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.023422 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6db8d66c-73bb-4ce5-81db-4254e41e78ad-logs\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.027717 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-ceph\") pod \"glance-default-external-api-0\" (UID: 
\"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.036514 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.039235 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-config-data\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.040577 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6db8d66c-73bb-4ce5-81db-4254e41e78ad-horizon-secret-key\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.047341 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.050110 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-config-data\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.050565 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.056069 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.062157 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cdc676b89-n65dj"] Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.069063 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhf29\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-kube-api-access-qhf29\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.074314 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-scripts\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.092643 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkp8\" (UniqueName: \"kubernetes.io/projected/6db8d66c-73bb-4ce5-81db-4254e41e78ad-kube-api-access-jhkp8\") pod \"horizon-f99fb4f97-btmqc\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135578 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135672 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135701 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135783 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135810 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135876 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135938 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135969 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.135992 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r7hh\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-kube-api-access-9r7hh\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.153289 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.171395 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.206691 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.217991 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.238765 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.238818 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r7hh\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-kube-api-access-9r7hh\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.238845 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.238879 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-horizon-secret-key\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.238914 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.238938 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.239005 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.239029 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.239052 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-logs\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.239118 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.239137 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-scripts\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.239176 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-config-data\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.239222 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvq5\" (UniqueName: \"kubernetes.io/projected/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-kube-api-access-kzvq5\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.239248 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.240044 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.240517 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.242492 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.290133 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.310043 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.315805 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.316559 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r7hh\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-kube-api-access-9r7hh\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.339700 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.340549 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.342533 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.353006 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.361641 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-logs\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.361871 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-scripts\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.361967 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-config-data\") pod \"horizon-cdc676b89-n65dj\" (UID: 
\"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.362043 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvq5\" (UniqueName: \"kubernetes.io/projected/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-kube-api-access-kzvq5\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.362199 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-horizon-secret-key\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.362882 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-logs\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.363173 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-scripts\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.364140 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-config-data\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.387773 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvq5\" (UniqueName: \"kubernetes.io/projected/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-kube-api-access-kzvq5\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.395341 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"946dc99c-def1-464a-87fe-7a5a8b46b325","Type":"ContainerStarted","Data":"54a222498f46223c888cc5e0b6c6cfac90f644bef2e5176bdaddb221dffbe4b3"} Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.395493 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-horizon-secret-key\") pod \"horizon-cdc676b89-n65dj\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.411686 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.597831 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.758054 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-jnfwt"] Jan 21 14:03:54 crc kubenswrapper[4959]: W0121 14:03:54.767872 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44f71bcf_72a3_4877_bf31_c4c4ae441f70.slice/crio-ce1eb17c3a1246d60f252ae0653d75c40f39d1e1c41986026a5002a930e064cb WatchSource:0}: Error finding container ce1eb17c3a1246d60f252ae0653d75c40f39d1e1c41986026a5002a930e064cb: Status 404 returned error can't find the container with id ce1eb17c3a1246d60f252ae0653d75c40f39d1e1c41986026a5002a930e064cb Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.832970 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 14:03:54 crc kubenswrapper[4959]: I0121 14:03:54.982448 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-eb24-account-create-update-g2x4m"] Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.117871 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f99fb4f97-btmqc"] Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.204425 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cdc676b89-n65dj"] Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.357163 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.407487 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-eb24-account-create-update-g2x4m" event={"ID":"25b2bbae-a802-455f-8aa1-c4b1f744271a","Type":"ContainerStarted","Data":"63b813ced5ac62b788bea31d5de8ff3f359b5c1190f04d96057770ee392c100b"} Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.409505 4959 generic.go:334] "Generic (PLEG): container finished" podID="44f71bcf-72a3-4877-bf31-c4c4ae441f70" containerID="1a56a61ad6a1fae7faf97c96035d45cc3a964e742123c8ceb85b8e3ff267b8f5" exitCode=0 Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.409560 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jnfwt" event={"ID":"44f71bcf-72a3-4877-bf31-c4c4ae441f70","Type":"ContainerDied","Data":"1a56a61ad6a1fae7faf97c96035d45cc3a964e742123c8ceb85b8e3ff267b8f5"} Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.409580 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jnfwt" event={"ID":"44f71bcf-72a3-4877-bf31-c4c4ae441f70","Type":"ContainerStarted","Data":"ce1eb17c3a1246d60f252ae0653d75c40f39d1e1c41986026a5002a930e064cb"} Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.411359 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f99fb4f97-btmqc" event={"ID":"6db8d66c-73bb-4ce5-81db-4254e41e78ad","Type":"ContainerStarted","Data":"e5c6b4c9c82d8a3d9508035fb804a6e7f1523f0f40d01f3c2dd867e9c25b4524"} Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.413716 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c638106d-abd9-4707-8da4-b5c5d1c30f57","Type":"ContainerStarted","Data":"ee5d5c5851223d3feeab7b15a46e98175400f1caa96a58b2d6b9aea3ae789b26"} Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.417887 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7988e7af-bc2e-49a5-bdab-420a34112f4b","Type":"ContainerStarted","Data":"658dfd625582ac578e254da943ebb36684c2c352e7cf0d5efad2bba181746678"} Jan 21 14:03:55 crc kubenswrapper[4959]: I0121 14:03:55.419289 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc676b89-n65dj" event={"ID":"3f3d4806-6323-4d63-a9bc-0b6c29d95b45","Type":"ContainerStarted","Data":"71057f3adadf91f281c77200eb5644a309387eb03e2918fa6ab414c03aa10d1b"} Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.271630 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.450921 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7988e7af-bc2e-49a5-bdab-420a34112f4b","Type":"ContainerStarted","Data":"4b4a929cbe8d95e178f2b9db0dc9e9ed5b7da81869daf91656abb16dbf5b248f"} Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.455316 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0511f1e1-170f-4645-9910-b92f1abb1ae6","Type":"ContainerStarted","Data":"f42e004648c986ae19218bbc1fed3e5637784e3156aa0a8a098a2a9c584ffa78"} Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.458957 4959 generic.go:334] "Generic (PLEG): container finished" podID="25b2bbae-a802-455f-8aa1-c4b1f744271a" containerID="6c9629111d28cf2f157e64747735ffc0acccf4240d1f6908945182b7348a2ff0" exitCode=0 Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.459047 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-eb24-account-create-update-g2x4m" event={"ID":"25b2bbae-a802-455f-8aa1-c4b1f744271a","Type":"ContainerDied","Data":"6c9629111d28cf2f157e64747735ffc0acccf4240d1f6908945182b7348a2ff0"} Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.463928 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"946dc99c-def1-464a-87fe-7a5a8b46b325","Type":"ContainerStarted","Data":"5a1bc39ec0d48d7fe0554a47c9e8ce09267a970fa4902f2e98772050459946b9"} Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.463993 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"946dc99c-def1-464a-87fe-7a5a8b46b325","Type":"ContainerStarted","Data":"c313ec6fdbf12c3b1700ff3563b6bc5872ab5e4428dad97a923e22ce29e0b50f"} Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.521450 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.6354441619999998 podStartE2EDuration="4.521429188s" podCreationTimestamp="2026-01-21 14:03:52 +0000 UTC" firstStartedPulling="2026-01-21 14:03:54.352824413 +0000 UTC m=+3295.315854956" lastFinishedPulling="2026-01-21 14:03:55.238809439 +0000 UTC m=+3296.201839982" observedRunningTime="2026-01-21 14:03:56.517361638 +0000 UTC m=+3297.480392181" watchObservedRunningTime="2026-01-21 14:03:56.521429188 +0000 UTC m=+3297.484459731" Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.895683 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.958515 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4fvb\" (UniqueName: \"kubernetes.io/projected/44f71bcf-72a3-4877-bf31-c4c4ae441f70-kube-api-access-g4fvb\") pod \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\" (UID: \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\") " Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.958973 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44f71bcf-72a3-4877-bf31-c4c4ae441f70-operator-scripts\") pod \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\" (UID: \"44f71bcf-72a3-4877-bf31-c4c4ae441f70\") " Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.960400 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f71bcf-72a3-4877-bf31-c4c4ae441f70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44f71bcf-72a3-4877-bf31-c4c4ae441f70" (UID: "44f71bcf-72a3-4877-bf31-c4c4ae441f70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:03:56 crc kubenswrapper[4959]: I0121 14:03:56.971990 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f71bcf-72a3-4877-bf31-c4c4ae441f70-kube-api-access-g4fvb" (OuterVolumeSpecName: "kube-api-access-g4fvb") pod "44f71bcf-72a3-4877-bf31-c4c4ae441f70" (UID: "44f71bcf-72a3-4877-bf31-c4c4ae441f70"). InnerVolumeSpecName "kube-api-access-g4fvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.062789 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4fvb\" (UniqueName: \"kubernetes.io/projected/44f71bcf-72a3-4877-bf31-c4c4ae441f70-kube-api-access-g4fvb\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.062828 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44f71bcf-72a3-4877-bf31-c4c4ae441f70-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.121236 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cdc676b89-n65dj"] Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.270179 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c5b8cfdcd-l422b"] Jan 21 14:03:57 crc kubenswrapper[4959]: E0121 14:03:57.270604 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f71bcf-72a3-4877-bf31-c4c4ae441f70" containerName="mariadb-database-create" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.270620 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f71bcf-72a3-4877-bf31-c4c4ae441f70" containerName="mariadb-database-create" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.270791 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f71bcf-72a3-4877-bf31-c4c4ae441f70" containerName="mariadb-database-create" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.271739 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.281364 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.331489 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c5b8cfdcd-l422b"] Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.331525 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.370778 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce473e12-c4b4-48e6-958f-f4416083667a-logs\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.371153 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-scripts\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.371212 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9wzl\" (UniqueName: \"kubernetes.io/projected/ce473e12-c4b4-48e6-958f-f4416083667a-kube-api-access-g9wzl\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.371234 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-tls-certs\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.371262 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-secret-key\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.371285 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-combined-ca-bundle\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.371305 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-config-data\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.416340 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f99fb4f97-btmqc"] Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.476043 4959 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9wzl\" (UniqueName: \"kubernetes.io/projected/ce473e12-c4b4-48e6-958f-f4416083667a-kube-api-access-g9wzl\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.476128 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-tls-certs\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.476192 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-secret-key\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.476242 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-combined-ca-bundle\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.476271 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-config-data\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.476393 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce473e12-c4b4-48e6-958f-f4416083667a-logs\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.476451 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-scripts\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.477535 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce473e12-c4b4-48e6-958f-f4416083667a-logs\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.478076 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-scripts\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.478758 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-config-data\") pod \"horizon-c5b8cfdcd-l422b\" 
(UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.485428 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-combined-ca-bundle\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.486848 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-tls-certs\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.491576 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77d7b8cf98-z8vcx"] Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.493192 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.495443 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-secret-key\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.509693 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9wzl\" (UniqueName: \"kubernetes.io/projected/ce473e12-c4b4-48e6-958f-f4416083667a-kube-api-access-g9wzl\") pod \"horizon-c5b8cfdcd-l422b\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.521127 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77d7b8cf98-z8vcx"] Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.538071 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jnfwt" event={"ID":"44f71bcf-72a3-4877-bf31-c4c4ae441f70","Type":"ContainerDied","Data":"ce1eb17c3a1246d60f252ae0653d75c40f39d1e1c41986026a5002a930e064cb"} Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.538142 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1eb17c3a1246d60f252ae0653d75c40f39d1e1c41986026a5002a930e064cb" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.539162 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jnfwt" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.605568 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.637652 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.681909 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-horizon-secret-key\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.682179 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-combined-ca-bundle\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.682256 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79766a29-f585-4567-b158-2506c12277cb-logs\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.682279 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79766a29-f585-4567-b158-2506c12277cb-config-data\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.682308 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcrc\" (UniqueName: \"kubernetes.io/projected/79766a29-f585-4567-b158-2506c12277cb-kube-api-access-xhcrc\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.682399 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-horizon-tls-certs\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.682415 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79766a29-f585-4567-b158-2506c12277cb-scripts\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.783866 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-horizon-secret-key\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.783916 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-combined-ca-bundle\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: 
\"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.783952 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79766a29-f585-4567-b158-2506c12277cb-logs\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.783969 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79766a29-f585-4567-b158-2506c12277cb-config-data\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.783998 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcrc\" (UniqueName: \"kubernetes.io/projected/79766a29-f585-4567-b158-2506c12277cb-kube-api-access-xhcrc\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.784058 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-horizon-tls-certs\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.784075 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79766a29-f585-4567-b158-2506c12277cb-scripts\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.784948 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79766a29-f585-4567-b158-2506c12277cb-scripts\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.785054 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79766a29-f585-4567-b158-2506c12277cb-logs\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.787265 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79766a29-f585-4567-b158-2506c12277cb-config-data\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.790382 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-combined-ca-bundle\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.792584 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-horizon-secret-key\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.803872 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcrc\" (UniqueName: \"kubernetes.io/projected/79766a29-f585-4567-b158-2506c12277cb-kube-api-access-xhcrc\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.812497 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/79766a29-f585-4567-b158-2506c12277cb-horizon-tls-certs\") pod \"horizon-77d7b8cf98-z8vcx\" (UID: \"79766a29-f585-4567-b158-2506c12277cb\") " pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:57 crc kubenswrapper[4959]: I0121 14:03:57.961803 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.155205 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.195053 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjj7s\" (UniqueName: \"kubernetes.io/projected/25b2bbae-a802-455f-8aa1-c4b1f744271a-kube-api-access-qjj7s\") pod \"25b2bbae-a802-455f-8aa1-c4b1f744271a\" (UID: \"25b2bbae-a802-455f-8aa1-c4b1f744271a\") " Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.196297 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b2bbae-a802-455f-8aa1-c4b1f744271a-operator-scripts\") pod \"25b2bbae-a802-455f-8aa1-c4b1f744271a\" (UID: \"25b2bbae-a802-455f-8aa1-c4b1f744271a\") " Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.197754 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b2bbae-a802-455f-8aa1-c4b1f744271a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25b2bbae-a802-455f-8aa1-c4b1f744271a" (UID: "25b2bbae-a802-455f-8aa1-c4b1f744271a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.208124 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b2bbae-a802-455f-8aa1-c4b1f744271a-kube-api-access-qjj7s" (OuterVolumeSpecName: "kube-api-access-qjj7s") pod "25b2bbae-a802-455f-8aa1-c4b1f744271a" (UID: "25b2bbae-a802-455f-8aa1-c4b1f744271a"). InnerVolumeSpecName "kube-api-access-qjj7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.286975 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.300026 4959 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b2bbae-a802-455f-8aa1-c4b1f744271a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.300068 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjj7s\" (UniqueName: \"kubernetes.io/projected/25b2bbae-a802-455f-8aa1-c4b1f744271a-kube-api-access-qjj7s\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.328375 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c5b8cfdcd-l422b"] Jan 21 14:03:58 crc kubenswrapper[4959]: W0121 14:03:58.335796 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce473e12_c4b4_48e6_958f_f4416083667a.slice/crio-e7f993d56d291385c7e0df6ca4df781bbea5173ef08b4a86d7a84fe006140676 WatchSource:0}: Error finding container e7f993d56d291385c7e0df6ca4df781bbea5173ef08b4a86d7a84fe006140676: Status 404 returned error can't find the container with id e7f993d56d291385c7e0df6ca4df781bbea5173ef08b4a86d7a84fe006140676 Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.563306 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5b8cfdcd-l422b" event={"ID":"ce473e12-c4b4-48e6-958f-f4416083667a","Type":"ContainerStarted","Data":"e7f993d56d291385c7e0df6ca4df781bbea5173ef08b4a86d7a84fe006140676"} Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.565674 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c638106d-abd9-4707-8da4-b5c5d1c30f57","Type":"ContainerStarted","Data":"2a18cf4115a2d71bac3199638c0fe4437c463b9eb8cb21f853e411b13a8fc04b"} Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.565727 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c638106d-abd9-4707-8da4-b5c5d1c30f57","Type":"ContainerStarted","Data":"b88d3c324f923f832be9ec2c43e1a7f87d13176139c7da775a9e128367919264"} Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.568548 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7988e7af-bc2e-49a5-bdab-420a34112f4b","Type":"ContainerStarted","Data":"ebd164623282abb27e8b9687f55d89518d261b9525359d6261817ab388185921"} Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.575776 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0511f1e1-170f-4645-9910-b92f1abb1ae6","Type":"ContainerStarted","Data":"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc"} Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.578029 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-eb24-account-create-update-g2x4m" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.578064 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-eb24-account-create-update-g2x4m" event={"ID":"25b2bbae-a802-455f-8aa1-c4b1f744271a","Type":"ContainerDied","Data":"63b813ced5ac62b788bea31d5de8ff3f359b5c1190f04d96057770ee392c100b"} Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.578124 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b813ced5ac62b788bea31d5de8ff3f359b5c1190f04d96057770ee392c100b" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.579348 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77d7b8cf98-z8vcx"] Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.590558 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.603356891 podStartE2EDuration="5.590539606s" podCreationTimestamp="2026-01-21 14:03:53 +0000 UTC" firstStartedPulling="2026-01-21 14:03:54.842463656 +0000 UTC m=+3295.805494189" lastFinishedPulling="2026-01-21 14:03:56.829646361 +0000 UTC m=+3297.792676904" observedRunningTime="2026-01-21 14:03:58.589234201 +0000 UTC m=+3299.552264754" watchObservedRunningTime="2026-01-21 14:03:58.590539606 +0000 UTC m=+3299.553570149" Jan 21 14:03:58 crc kubenswrapper[4959]: I0121 14:03:58.632307 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.632286244 podStartE2EDuration="5.632286244s" podCreationTimestamp="2026-01-21 14:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:03:58.612121439 +0000 UTC m=+3299.575151982" watchObservedRunningTime="2026-01-21 14:03:58.632286244 +0000 UTC m=+3299.595316787" Jan 21 14:03:59 crc kubenswrapper[4959]: I0121 14:03:59.591998 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0511f1e1-170f-4645-9910-b92f1abb1ae6","Type":"ContainerStarted","Data":"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218"} Jan 21 14:03:59 crc kubenswrapper[4959]: I0121 14:03:59.592169 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerName="glance-log" containerID="cri-o://e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc" gracePeriod=30 Jan 21 14:03:59 crc kubenswrapper[4959]: I0121 14:03:59.592233 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerName="glance-httpd" containerID="cri-o://2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218" gracePeriod=30 Jan 21 14:03:59 crc kubenswrapper[4959]: I0121 14:03:59.597908 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7b8cf98-z8vcx" event={"ID":"79766a29-f585-4567-b158-2506c12277cb","Type":"ContainerStarted","Data":"6c057d1b19abc5ff854db180857926d89d0e9d5b88b7e4023dfff894fc67ebe9"} Jan 21 14:03:59 crc kubenswrapper[4959]: I0121 14:03:59.598114 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" 
containerName="glance-httpd" containerID="cri-o://ebd164623282abb27e8b9687f55d89518d261b9525359d6261817ab388185921" gracePeriod=30 Jan 21 14:03:59 crc kubenswrapper[4959]: I0121 14:03:59.598186 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerName="glance-log" containerID="cri-o://4b4a929cbe8d95e178f2b9db0dc9e9ed5b7da81869daf91656abb16dbf5b248f" gracePeriod=30 Jan 21 14:03:59 crc kubenswrapper[4959]: I0121 14:03:59.632893 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.632874714 podStartE2EDuration="6.632874714s" podCreationTimestamp="2026-01-21 14:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:03:59.622168975 +0000 UTC m=+3300.585199538" watchObservedRunningTime="2026-01-21 14:03:59.632874714 +0000 UTC m=+3300.595905257" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.263175 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.447257 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.447360 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-combined-ca-bundle\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.447461 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-ceph\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.447866 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-scripts\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.447931 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-config-data\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.447953 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-public-tls-certs\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.447971 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhf29\" (UniqueName: 
\"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-kube-api-access-qhf29\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.448018 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-logs\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.448035 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-httpd-run\") pod \"0511f1e1-170f-4645-9910-b92f1abb1ae6\" (UID: \"0511f1e1-170f-4645-9910-b92f1abb1ae6\") " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.451600 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-logs" (OuterVolumeSpecName: "logs") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.451819 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.453951 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-scripts" (OuterVolumeSpecName: "scripts") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.455314 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.455323 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-kube-api-access-qhf29" (OuterVolumeSpecName: "kube-api-access-qhf29") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "kube-api-access-qhf29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.457485 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-ceph" (OuterVolumeSpecName: "ceph") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.480931 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.507702 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.530188 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-config-data" (OuterVolumeSpecName: "config-data") pod "0511f1e1-170f-4645-9910-b92f1abb1ae6" (UID: "0511f1e1-170f-4645-9910-b92f1abb1ae6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556648 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556692 4959 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556704 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhf29\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-kube-api-access-qhf29\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556740 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556748 4959 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0511f1e1-170f-4645-9910-b92f1abb1ae6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556781 4959 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556792 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556813 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0511f1e1-170f-4645-9910-b92f1abb1ae6-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.556826 4959 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0511f1e1-170f-4645-9910-b92f1abb1ae6-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.591864 4959 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.611256 4959 generic.go:334] "Generic (PLEG): container finished" podID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerID="2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218" exitCode=0 Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.611384 4959 generic.go:334] "Generic (PLEG): container finished" podID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerID="e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc" exitCode=143 Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.611325 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.611311 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0511f1e1-170f-4645-9910-b92f1abb1ae6","Type":"ContainerDied","Data":"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218"} Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.611550 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0511f1e1-170f-4645-9910-b92f1abb1ae6","Type":"ContainerDied","Data":"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc"} Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.611566 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0511f1e1-170f-4645-9910-b92f1abb1ae6","Type":"ContainerDied","Data":"f42e004648c986ae19218bbc1fed3e5637784e3156aa0a8a098a2a9c584ffa78"} Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.611570 4959 scope.go:117] "RemoveContainer" containerID="2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.623069 4959 generic.go:334] "Generic (PLEG): container finished" podID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerID="ebd164623282abb27e8b9687f55d89518d261b9525359d6261817ab388185921" exitCode=0 Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.623117 4959 generic.go:334] "Generic (PLEG): container finished" podID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerID="4b4a929cbe8d95e178f2b9db0dc9e9ed5b7da81869daf91656abb16dbf5b248f" exitCode=143 Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.623145 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7988e7af-bc2e-49a5-bdab-420a34112f4b","Type":"ContainerDied","Data":"ebd164623282abb27e8b9687f55d89518d261b9525359d6261817ab388185921"} Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.623178 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7988e7af-bc2e-49a5-bdab-420a34112f4b","Type":"ContainerDied","Data":"4b4a929cbe8d95e178f2b9db0dc9e9ed5b7da81869daf91656abb16dbf5b248f"} Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.658233 4959 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.665452 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.685914 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.712517 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:04:00 crc kubenswrapper[4959]: E0121 14:04:00.712896 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b2bbae-a802-455f-8aa1-c4b1f744271a" containerName="mariadb-account-create-update" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.712911 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b2bbae-a802-455f-8aa1-c4b1f744271a" containerName="mariadb-account-create-update" Jan 21 14:04:00 crc kubenswrapper[4959]: E0121 14:04:00.712941 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerName="glance-log" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.712949 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerName="glance-log" Jan 21 14:04:00 crc kubenswrapper[4959]: E0121 14:04:00.712967 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerName="glance-httpd" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.712974 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerName="glance-httpd" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.713168 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerName="glance-log" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.713183 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" containerName="glance-httpd" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.713199 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b2bbae-a802-455f-8aa1-c4b1f744271a" containerName="mariadb-account-create-update" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.714018 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.714207 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.720480 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.720733 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.863793 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.863902 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.863924 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.863940 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-logs\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.864005 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.864026 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkfvl\" (UniqueName: \"kubernetes.io/projected/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-kube-api-access-lkfvl\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.864047 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-ceph\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.864078 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.864116 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-config-data\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965480 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965522 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkfvl\" (UniqueName: \"kubernetes.io/projected/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-kube-api-access-lkfvl\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965546 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-ceph\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965602 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-scripts\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965625 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-config-data\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965696 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965759 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965822 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " 
pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.965841 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-logs\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.966872 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.968023 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.969996 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-logs\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.973597 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.974413 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-config-data\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.975486 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-ceph\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.978126 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-scripts\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.990132 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkfvl\" (UniqueName: \"kubernetes.io/projected/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-kube-api-access-lkfvl\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:00 crc kubenswrapper[4959]: I0121 14:04:00.998875 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbd37aa-b9e2-4d8b-a249-ea87147b176f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:01 crc kubenswrapper[4959]: I0121 14:04:01.007187 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7dbd37aa-b9e2-4d8b-a249-ea87147b176f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:04:01 crc kubenswrapper[4959]: I0121 14:04:01.092950 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:04:01 crc kubenswrapper[4959]: I0121 14:04:01.299822 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0511f1e1-170f-4645-9910-b92f1abb1ae6" path="/var/lib/kubelet/pods/0511f1e1-170f-4645-9910-b92f1abb1ae6/volumes" Jan 21 14:04:03 crc kubenswrapper[4959]: I0121 14:04:03.363919 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 21 14:04:03 crc kubenswrapper[4959]: I0121 14:04:03.499949 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 21 14:04:03 crc kubenswrapper[4959]: I0121 14:04:03.590931 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.343065 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-ft8qw"] Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.344260 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.355749 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-ft8qw"] Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.387937 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-pwgzk" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.388055 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.444707 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-job-config-data\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.444847 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj68d\" (UniqueName: \"kubernetes.io/projected/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-kube-api-access-jj68d\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.444905 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-config-data\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.445973 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-combined-ca-bundle\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.548292 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-job-config-data\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.549399 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj68d\" (UniqueName: \"kubernetes.io/projected/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-kube-api-access-jj68d\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.549484 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-config-data\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.549552 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-combined-ca-bundle\") pod \"manila-db-sync-ft8qw\" (UID: 
\"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.554052 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-job-config-data\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.554196 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-combined-ca-bundle\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.554522 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-config-data\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.571593 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj68d\" (UniqueName: \"kubernetes.io/projected/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-kube-api-access-jj68d\") pod \"manila-db-sync-ft8qw\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") " pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:04 crc kubenswrapper[4959]: I0121 14:04:04.710353 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ft8qw" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.118322 4959 scope.go:117] "RemoveContainer" containerID="e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.323401 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.379481 4959 scope.go:117] "RemoveContainer" containerID="2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218" Jan 21 14:04:06 crc kubenswrapper[4959]: E0121 14:04:06.380070 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218\": container with ID starting with 2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218 not found: ID does not exist" containerID="2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.380132 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218"} err="failed to get container status \"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218\": rpc error: code = NotFound desc = could not find container \"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218\": container with ID starting with 2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218 not found: ID does not exist" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.380161 4959 scope.go:117] "RemoveContainer" containerID="e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc" Jan 21 14:04:06 crc kubenswrapper[4959]: E0121 14:04:06.380772 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc\": container with ID starting with e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc not found: ID does not exist" containerID="e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.380809 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc"} err="failed to get container status \"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc\": rpc error: code = NotFound desc = could not find container \"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc\": container with ID starting with e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc not found: ID does not exist" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.380828 4959 scope.go:117] "RemoveContainer" containerID="2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.383163 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218"} err="failed to get container status \"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218\": rpc error: code = NotFound desc = could not find container \"2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218\": container with ID starting with 2eb2e5c518f3c3a8caa2136123ded5529569275353223f7bd8a8345489e23218 not found: ID does not exist" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.383196 4959 scope.go:117] "RemoveContainer" containerID="e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 
14:04:06.385299 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc"} err="failed to get container status \"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc\": rpc error: code = NotFound desc = could not find container \"e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc\": container with ID starting with e6f84d6e1c6eddef4f4f24de0cbf8e16eeb28102dfb94d90fcbe487773b077bc not found: ID does not exist" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499230 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499320 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r7hh\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-kube-api-access-9r7hh\") pod \"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499353 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-httpd-run\") pod \"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499488 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-combined-ca-bundle\") pod \"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499631 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-scripts\") pod \"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499661 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-ceph\") pod \"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499770 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-internal-tls-certs\") pod \"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499786 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-logs\") pod \"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.499841 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-config-data\") pod 
\"7988e7af-bc2e-49a5-bdab-420a34112f4b\" (UID: \"7988e7af-bc2e-49a5-bdab-420a34112f4b\") " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.501512 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.501916 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-logs" (OuterVolumeSpecName: "logs") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.526149 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-kube-api-access-9r7hh" (OuterVolumeSpecName: "kube-api-access-9r7hh") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "kube-api-access-9r7hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.603936 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.604331 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r7hh\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-kube-api-access-9r7hh\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.604342 4959 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7988e7af-bc2e-49a5-bdab-420a34112f4b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.605622 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.687328 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-ceph" (OuterVolumeSpecName: "ceph") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.691526 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-scripts" (OuterVolumeSpecName: "scripts") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.706646 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.706684 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7988e7af-bc2e-49a5-bdab-420a34112f4b-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.706713 4959 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.718425 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7988e7af-bc2e-49a5-bdab-420a34112f4b","Type":"ContainerDied","Data":"658dfd625582ac578e254da943ebb36684c2c352e7cf0d5efad2bba181746678"} Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.718486 4959 scope.go:117] "RemoveContainer" containerID="ebd164623282abb27e8b9687f55d89518d261b9525359d6261817ab388185921" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.718607 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.759417 4959 scope.go:117] "RemoveContainer" containerID="4b4a929cbe8d95e178f2b9db0dc9e9ed5b7da81869daf91656abb16dbf5b248f" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.800797 4959 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.806822 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-ft8qw"] Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.808041 4959 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.842716 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.859269 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.910241 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.910288 4959 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:06 crc kubenswrapper[4959]: I0121 14:04:06.931299 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-config-data" (OuterVolumeSpecName: "config-data") pod "7988e7af-bc2e-49a5-bdab-420a34112f4b" (UID: "7988e7af-bc2e-49a5-bdab-420a34112f4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.012863 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7988e7af-bc2e-49a5-bdab-420a34112f4b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.069344 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.126825 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.135533 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:04:07 crc kubenswrapper[4959]: E0121 14:04:07.136049 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerName="glance-httpd" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.136070 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerName="glance-httpd" Jan 21 14:04:07 crc kubenswrapper[4959]: E0121 14:04:07.136133 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerName="glance-log" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.136142 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerName="glance-log" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.140128 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerName="glance-httpd" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.140167 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" containerName="glance-log" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.142803 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.146970 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.147125 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.147230 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.299388 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7988e7af-bc2e-49a5-bdab-420a34112f4b" path="/var/lib/kubelet/pods/7988e7af-bc2e-49a5-bdab-420a34112f4b/volumes" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322263 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322316 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322343 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322408 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322440 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322476 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbfjv\" (UniqueName: \"kubernetes.io/projected/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-kube-api-access-kbfjv\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322529 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322578 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.322595 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.423954 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.424065 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.424085 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.424147 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.424182 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.424202 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.424269 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.424293 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.424336 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbfjv\" (UniqueName: \"kubernetes.io/projected/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-kube-api-access-kbfjv\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.426142 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.427200 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.427573 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.429173 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.431224 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.434053 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.436510 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.437356 4959 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.446110 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbfjv\" (UniqueName: \"kubernetes.io/projected/ba76fdb1-6ada-495d-8846-35cd2cb9bb4e-kube-api-access-kbfjv\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.458323 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.532597 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.579742 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:04:07 crc kubenswrapper[4959]: W0121 14:04:07.613008 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dbd37aa_b9e2_4d8b_a249_ea87147b176f.slice/crio-c84d9a15dcd21c8b6055a72ae9f0a3d25e323f0a19988c2cd0f55ac95f7b1955 WatchSource:0}: Error finding container c84d9a15dcd21c8b6055a72ae9f0a3d25e323f0a19988c2cd0f55ac95f7b1955: Status 404 returned error can't find the container with id c84d9a15dcd21c8b6055a72ae9f0a3d25e323f0a19988c2cd0f55ac95f7b1955 Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.759429 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f99fb4f97-btmqc" event={"ID":"6db8d66c-73bb-4ce5-81db-4254e41e78ad","Type":"ContainerStarted","Data":"d6cb9db287b976c79a6dda35e4c1ff728ee847f49d4154216364e2ff5907acc4"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.759507 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f99fb4f97-btmqc" event={"ID":"6db8d66c-73bb-4ce5-81db-4254e41e78ad","Type":"ContainerStarted","Data":"02126e5b8910499451e6c11be45f2c25a4bcc6ae30932766a31b0e89ea106645"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.759643 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f99fb4f97-btmqc" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerName="horizon-log" containerID="cri-o://02126e5b8910499451e6c11be45f2c25a4bcc6ae30932766a31b0e89ea106645" gracePeriod=30 Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.760205 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f99fb4f97-btmqc" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerName="horizon" containerID="cri-o://d6cb9db287b976c79a6dda35e4c1ff728ee847f49d4154216364e2ff5907acc4" gracePeriod=30 Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.784517 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc676b89-n65dj" 
event={"ID":"3f3d4806-6323-4d63-a9bc-0b6c29d95b45","Type":"ContainerStarted","Data":"d76ca83357792d84a6bd63297e8dfe21ee488065dd8c53a700e90fd3b6e0e061"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.784915 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc676b89-n65dj" event={"ID":"3f3d4806-6323-4d63-a9bc-0b6c29d95b45","Type":"ContainerStarted","Data":"9d33e0d02a55ed982cc97134b8b4cc3e790273d959300bc85ec4e4643de9c416"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.785502 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cdc676b89-n65dj" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerName="horizon-log" containerID="cri-o://9d33e0d02a55ed982cc97134b8b4cc3e790273d959300bc85ec4e4643de9c416" gracePeriod=30 Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.785936 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cdc676b89-n65dj" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerName="horizon" containerID="cri-o://d76ca83357792d84a6bd63297e8dfe21ee488065dd8c53a700e90fd3b6e0e061" gracePeriod=30 Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.794301 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f99fb4f97-btmqc" podStartSLOduration=3.663428316 podStartE2EDuration="14.794278188s" podCreationTimestamp="2026-01-21 14:03:53 +0000 UTC" firstStartedPulling="2026-01-21 14:03:55.18367772 +0000 UTC m=+3296.146708263" lastFinishedPulling="2026-01-21 14:04:06.314527592 +0000 UTC m=+3307.277558135" observedRunningTime="2026-01-21 14:04:07.779719485 +0000 UTC m=+3308.742750048" watchObservedRunningTime="2026-01-21 14:04:07.794278188 +0000 UTC m=+3308.757308741" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.802252 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ft8qw" event={"ID":"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11","Type":"ContainerStarted","Data":"d0146e7e77a8e3f2123cc599984055bf8854d2efaf59ce67183636ea31ea0f32"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.817278 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7dbd37aa-b9e2-4d8b-a249-ea87147b176f","Type":"ContainerStarted","Data":"c84d9a15dcd21c8b6055a72ae9f0a3d25e323f0a19988c2cd0f55ac95f7b1955"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.819482 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cdc676b89-n65dj" podStartSLOduration=3.820323874 podStartE2EDuration="14.819454721s" podCreationTimestamp="2026-01-21 14:03:53 +0000 UTC" firstStartedPulling="2026-01-21 14:03:55.252570451 +0000 UTC m=+3296.215600994" lastFinishedPulling="2026-01-21 14:04:06.251701298 +0000 UTC m=+3307.214731841" observedRunningTime="2026-01-21 14:04:07.804571739 +0000 UTC m=+3308.767602282" watchObservedRunningTime="2026-01-21 14:04:07.819454721 +0000 UTC m=+3308.782485274" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.823729 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7b8cf98-z8vcx" event={"ID":"79766a29-f585-4567-b158-2506c12277cb","Type":"ContainerStarted","Data":"b2078371e867f50d5316f32ee495239245f28d2e781206d5793d10b94a72f6dd"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.823785 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7b8cf98-z8vcx" 
event={"ID":"79766a29-f585-4567-b158-2506c12277cb","Type":"ContainerStarted","Data":"aa2b4ee778b0e02df58be72345c2c733eee7697f4ec72f5bf11b2020b3f13230"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.831138 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5b8cfdcd-l422b" event={"ID":"ce473e12-c4b4-48e6-958f-f4416083667a","Type":"ContainerStarted","Data":"2bd8458824c397c0f34cfe672440afdb7eade7f21aacf3b99c906c5b243ca97e"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.831183 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5b8cfdcd-l422b" event={"ID":"ce473e12-c4b4-48e6-958f-f4416083667a","Type":"ContainerStarted","Data":"437127343c566bce0b0697fe89282e76a91554fcea79726985bb7ff8aad47a9e"} Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.867750 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77d7b8cf98-z8vcx" podStartSLOduration=3.208187462 podStartE2EDuration="10.867732382s" podCreationTimestamp="2026-01-21 14:03:57 +0000 UTC" firstStartedPulling="2026-01-21 14:03:58.603889287 +0000 UTC m=+3299.566919820" lastFinishedPulling="2026-01-21 14:04:06.263434197 +0000 UTC m=+3307.226464740" observedRunningTime="2026-01-21 14:04:07.852775338 +0000 UTC m=+3308.815805901" watchObservedRunningTime="2026-01-21 14:04:07.867732382 +0000 UTC m=+3308.830762925" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.882135 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c5b8cfdcd-l422b" podStartSLOduration=2.906296383 podStartE2EDuration="10.8821127s" podCreationTimestamp="2026-01-21 14:03:57 +0000 UTC" firstStartedPulling="2026-01-21 14:03:58.33852463 +0000 UTC m=+3299.301555173" lastFinishedPulling="2026-01-21 14:04:06.314340947 +0000 UTC m=+3307.277371490" observedRunningTime="2026-01-21 14:04:07.873549545 +0000 UTC m=+3308.836580088" watchObservedRunningTime="2026-01-21 14:04:07.8821127 +0000 UTC m=+3308.845143243" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.964170 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:04:07 crc kubenswrapper[4959]: I0121 14:04:07.964238 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:04:08 crc kubenswrapper[4959]: I0121 14:04:08.197836 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:04:08 crc kubenswrapper[4959]: I0121 14:04:08.841786 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e","Type":"ContainerStarted","Data":"6909746fe2687be9e70071754dbe07e07dba9f365481bac1729480b88d297ed9"} Jan 21 14:04:08 crc kubenswrapper[4959]: I0121 14:04:08.852652 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7dbd37aa-b9e2-4d8b-a249-ea87147b176f","Type":"ContainerStarted","Data":"a388211049b0f871879946986b42922b27831d4f23ad707d6f7768493825ebe6"} Jan 21 14:04:09 crc kubenswrapper[4959]: I0121 14:04:09.869049 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7dbd37aa-b9e2-4d8b-a249-ea87147b176f","Type":"ContainerStarted","Data":"9595a953b15e643799da7bef65fd5570a5c3ed9e70b9cc600673f7e8fc1ac3ff"} Jan 21 14:04:09 crc kubenswrapper[4959]: I0121 14:04:09.870941 4959 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e","Type":"ContainerStarted","Data":"a7cc34fc80ead682517ef9b0572d7dc1995b47970f089420cd00732af043b987"} Jan 21 14:04:09 crc kubenswrapper[4959]: I0121 14:04:09.870988 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ba76fdb1-6ada-495d-8846-35cd2cb9bb4e","Type":"ContainerStarted","Data":"9e6b091bb5df96cff38137f13edf7a3e218e061d6a430b21399810b77524dff3"} Jan 21 14:04:09 crc kubenswrapper[4959]: I0121 14:04:09.904666 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.904640524 podStartE2EDuration="9.904640524s" podCreationTimestamp="2026-01-21 14:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:04:09.891786735 +0000 UTC m=+3310.854817288" watchObservedRunningTime="2026-01-21 14:04:09.904640524 +0000 UTC m=+3310.867671067" Jan 21 14:04:10 crc kubenswrapper[4959]: I0121 14:04:10.904107 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.904075364 podStartE2EDuration="3.904075364s" podCreationTimestamp="2026-01-21 14:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:04:10.903081668 +0000 UTC m=+3311.866112231" watchObservedRunningTime="2026-01-21 14:04:10.904075364 +0000 UTC m=+3311.867105907" Jan 21 14:04:11 crc kubenswrapper[4959]: I0121 14:04:11.094014 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:04:11 crc kubenswrapper[4959]: I0121 14:04:11.094136 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:04:11 crc kubenswrapper[4959]: I0121 14:04:11.133135 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:04:11 crc kubenswrapper[4959]: I0121 14:04:11.175579 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:04:11 crc kubenswrapper[4959]: I0121 14:04:11.892451 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:04:11 crc kubenswrapper[4959]: I0121 14:04:11.892503 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:04:14 crc kubenswrapper[4959]: I0121 14:04:14.176802 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:04:14 crc kubenswrapper[4959]: I0121 14:04:14.411923 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:04:14 crc kubenswrapper[4959]: I0121 14:04:14.938467 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ft8qw" event={"ID":"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11","Type":"ContainerStarted","Data":"1ef790c1e4f5dc357b8103435b6560d63b66ba0977ba6140a9b156b1fa8cfc18"} Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.533295 4959 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.533588 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.565505 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.590317 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.600427 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-ft8qw" podStartSLOduration=6.853913424 podStartE2EDuration="13.600405468s" podCreationTimestamp="2026-01-21 14:04:04 +0000 UTC" firstStartedPulling="2026-01-21 14:04:06.825507874 +0000 UTC m=+3307.788538417" lastFinishedPulling="2026-01-21 14:04:13.571999918 +0000 UTC m=+3314.535030461" observedRunningTime="2026-01-21 14:04:14.956930778 +0000 UTC m=+3315.919961321" watchObservedRunningTime="2026-01-21 14:04:17.600405468 +0000 UTC m=+3318.563436021" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.638469 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.638535 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.640294 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c5b8cfdcd-l422b" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.965148 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77d7b8cf98-z8vcx" podUID="79766a29-f585-4567-b158-2506c12277cb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.965723 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:17 crc kubenswrapper[4959]: I0121 14:04:17.965751 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:19 crc kubenswrapper[4959]: I0121 14:04:19.978752 4959 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:04:19 crc kubenswrapper[4959]: I0121 14:04:19.979053 4959 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:04:20 crc kubenswrapper[4959]: I0121 14:04:20.690796 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:20 crc kubenswrapper[4959]: I0121 14:04:20.707297 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.492701 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qz72x"] 
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.497226 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.545046 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qz72x"]
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.564946 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-utilities\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.565385 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25728\" (UniqueName: \"kubernetes.io/projected/19df0d94-f175-4339-ad82-e68d078070de-kube-api-access-25728\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.565631 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-catalog-content\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.667661 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-utilities\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.667970 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25728\" (UniqueName: \"kubernetes.io/projected/19df0d94-f175-4339-ad82-e68d078070de-kube-api-access-25728\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.668073 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-catalog-content\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.668633 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-catalog-content\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.668893 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-utilities\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.713087 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25728\" (UniqueName: \"kubernetes.io/projected/19df0d94-f175-4339-ad82-e68d078070de-kube-api-access-25728\") pod \"redhat-operators-qz72x\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:26 crc kubenswrapper[4959]: I0121 14:04:26.864670 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qz72x"
Jan 21 14:04:27 crc kubenswrapper[4959]: I0121 14:04:27.060092 4959 generic.go:334] "Generic (PLEG): container finished" podID="4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" containerID="1ef790c1e4f5dc357b8103435b6560d63b66ba0977ba6140a9b156b1fa8cfc18" exitCode=0
Jan 21 14:04:27 crc kubenswrapper[4959]: I0121 14:04:27.060147 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ft8qw" event={"ID":"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11","Type":"ContainerDied","Data":"1ef790c1e4f5dc357b8103435b6560d63b66ba0977ba6140a9b156b1fa8cfc18"}
Jan 21 14:04:27 crc kubenswrapper[4959]: I0121 14:04:27.412417 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qz72x"]
Jan 21 14:04:27 crc kubenswrapper[4959]: I0121 14:04:27.638856 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c5b8cfdcd-l422b" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Jan 21 14:04:27 crc kubenswrapper[4959]: I0121 14:04:27.965255 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77d7b8cf98-z8vcx" podUID="79766a29-f585-4567-b158-2506c12277cb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused"
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.076007 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qz72x" event={"ID":"19df0d94-f175-4339-ad82-e68d078070de","Type":"ContainerStarted","Data":"5849388b9b85fd7bfa53d5be67d6de0b8124362199151969ac42af8230b23987"}
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.632746 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ft8qw"
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.705810 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-config-data\") pod \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") "
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.705925 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj68d\" (UniqueName: \"kubernetes.io/projected/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-kube-api-access-jj68d\") pod \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") "
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.706009 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-job-config-data\") pod \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") "
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.706084 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-combined-ca-bundle\") pod \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\" (UID: \"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11\") "
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.711817 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" (UID: "4dd45b2a-8a39-40f9-9da8-dc9f1330cc11"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.713309 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-kube-api-access-jj68d" (OuterVolumeSpecName: "kube-api-access-jj68d") pod "4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" (UID: "4dd45b2a-8a39-40f9-9da8-dc9f1330cc11"). InnerVolumeSpecName "kube-api-access-jj68d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.716369 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-config-data" (OuterVolumeSpecName: "config-data") pod "4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" (UID: "4dd45b2a-8a39-40f9-9da8-dc9f1330cc11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.741221 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" (UID: "4dd45b2a-8a39-40f9-9da8-dc9f1330cc11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.809011 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.809048 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj68d\" (UniqueName: \"kubernetes.io/projected/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-kube-api-access-jj68d\") on node \"crc\" DevicePath \"\""
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.809057 4959 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-job-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:04:28 crc kubenswrapper[4959]: I0121 14:04:28.809066 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.086835 4959 generic.go:334] "Generic (PLEG): container finished" podID="19df0d94-f175-4339-ad82-e68d078070de" containerID="b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be" exitCode=0
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.087168 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qz72x" event={"ID":"19df0d94-f175-4339-ad82-e68d078070de","Type":"ContainerDied","Data":"b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be"}
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.088556 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ft8qw" event={"ID":"4dd45b2a-8a39-40f9-9da8-dc9f1330cc11","Type":"ContainerDied","Data":"d0146e7e77a8e3f2123cc599984055bf8854d2efaf59ce67183636ea31ea0f32"}
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.088587 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ft8qw"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.088596 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0146e7e77a8e3f2123cc599984055bf8854d2efaf59ce67183636ea31ea0f32"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.343545 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 14:04:29 crc kubenswrapper[4959]: E0121 14:04:29.344027 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" containerName="manila-db-sync"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.344048 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" containerName="manila-db-sync"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.344397 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" containerName="manila-db-sync"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.345614 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.348533 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.348791 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.349298 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.355305 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-pwgzk"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.368755 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.420361 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-scripts\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.420425 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zqk8\" (UniqueName: \"kubernetes.io/projected/3101b127-ec2a-4baf-94fa-799b831a5aed-kube-api-access-7zqk8\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.420454 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.420576 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.420624 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3101b127-ec2a-4baf-94fa-799b831a5aed-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.420713 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.448887 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.450633 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.453584 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.473604 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.506445 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-g8mkx"]
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.508107 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.536944 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.536999 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537024 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-scripts\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537056 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7flj7\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-kube-api-access-7flj7\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537079 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537105 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537152 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537214 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537258 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfkrq\" (UniqueName: \"kubernetes.io/projected/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-kube-api-access-kfkrq\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537287 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-scripts\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537326 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zqk8\" (UniqueName: \"kubernetes.io/projected/3101b127-ec2a-4baf-94fa-799b831a5aed-kube-api-access-7zqk8\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537350 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537426 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537506 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-config\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537583 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-ceph\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537614 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537635 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537658 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537702 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537727 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3101b127-ec2a-4baf-94fa-799b831a5aed-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.537896 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3101b127-ec2a-4baf-94fa-799b831a5aed-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.559796 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-scripts\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.561230 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.569189 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.578721 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-g8mkx"]
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.580582 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.598497 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zqk8\" (UniqueName: \"kubernetes.io/projected/3101b127-ec2a-4baf-94fa-799b831a5aed-kube-api-access-7zqk8\") pod \"manila-scheduler-0\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657216 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-ceph\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657481 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657511 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657540 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657611 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657628 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-scripts\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657655 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7flj7\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-kube-api-access-7flj7\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657671 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657687 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657706 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657731 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657754 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfkrq\" (UniqueName: \"kubernetes.io/projected/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-kube-api-access-kfkrq\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657788 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.657831 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-config\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.658783 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-config\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.664759 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.665336 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.665906 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.665965 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.666368 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.666899 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.667752 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.672405 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-ceph\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.679926 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.682863 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-scripts\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.688304 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7flj7\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-kube-api-access-7flj7\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.698001 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.700495 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.704100 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfkrq\" (UniqueName: \"kubernetes.io/projected/b2f855a6-8d49-4c3c-97c2-ce1aee877c27-kube-api-access-kfkrq\") pod \"dnsmasq-dns-69655fd4bf-g8mkx\" (UID: \"b2f855a6-8d49-4c3c-97c2-ce1aee877c27\") " pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.769014 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.814594 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.817324 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.823638 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.833390 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.849363 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.866782 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckhkh\" (UniqueName: \"kubernetes.io/projected/b3f2df34-4101-4b74-b832-ca34fb11e70c-kube-api-access-ckhkh\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.866827 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-scripts\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.866861 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f2df34-4101-4b74-b832-ca34fb11e70c-etc-machine-id\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.866908 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.866997 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.867027 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data-custom\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.867054 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f2df34-4101-4b74-b832-ca34fb11e70c-logs\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.977475 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.977832 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data-custom\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0"
Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.977865 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f2df34-4101-4b74-b832-ca34fb11e70c-logs\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.977939 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckhkh\" (UniqueName: \"kubernetes.io/projected/b3f2df34-4101-4b74-b832-ca34fb11e70c-kube-api-access-ckhkh\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.977965 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-scripts\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.977995 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f2df34-4101-4b74-b832-ca34fb11e70c-etc-machine-id\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.978044 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.979308 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f2df34-4101-4b74-b832-ca34fb11e70c-logs\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.991131 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.991825 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f2df34-4101-4b74-b832-ca34fb11e70c-etc-machine-id\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:29 crc kubenswrapper[4959]: I0121 14:04:29.999607 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-scripts\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:30 crc kubenswrapper[4959]: I0121 14:04:30.000223 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data-custom\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:30 crc kubenswrapper[4959]: I0121 14:04:30.000895 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:30 crc kubenswrapper[4959]: I0121 14:04:30.018680 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckhkh\" (UniqueName: \"kubernetes.io/projected/b3f2df34-4101-4b74-b832-ca34fb11e70c-kube-api-access-ckhkh\") pod \"manila-api-0\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " pod="openstack/manila-api-0" Jan 21 14:04:30 crc kubenswrapper[4959]: I0121 14:04:30.192644 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 21 14:04:30 crc kubenswrapper[4959]: I0121 14:04:30.363939 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 14:04:30 crc kubenswrapper[4959]: W0121 14:04:30.381977 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3101b127_ec2a_4baf_94fa_799b831a5aed.slice/crio-0970b491d26d9a3db5635315453359cafc8d0a0a2933918f103fae1ff2587a3e WatchSource:0}: Error finding container 0970b491d26d9a3db5635315453359cafc8d0a0a2933918f103fae1ff2587a3e: Status 404 returned error can't find the container with id 0970b491d26d9a3db5635315453359cafc8d0a0a2933918f103fae1ff2587a3e Jan 21 14:04:30 crc kubenswrapper[4959]: I0121 14:04:30.559600 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 14:04:30 crc kubenswrapper[4959]: I0121 14:04:30.677155 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-g8mkx"] Jan 21 14:04:30 crc kubenswrapper[4959]: W0121 14:04:30.692188 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2f855a6_8d49_4c3c_97c2_ce1aee877c27.slice/crio-b872c747b803796996c139917b2b39fe2cb32d886df66c7088bb2366d155bb30 WatchSource:0}: Error finding container b872c747b803796996c139917b2b39fe2cb32d886df66c7088bb2366d155bb30: Status 404 returned error can't find the container with id b872c747b803796996c139917b2b39fe2cb32d886df66c7088bb2366d155bb30 Jan 21 14:04:30 crc kubenswrapper[4959]: I0121 14:04:30.983496 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 14:04:31 crc kubenswrapper[4959]: W0121 14:04:31.074466 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3f2df34_4101_4b74_b832_ca34fb11e70c.slice/crio-baac0660e04e3fdc3b13a6aeb3fb675430fe9c689c7d24183bd6574afe13dd35 WatchSource:0}: Error finding container baac0660e04e3fdc3b13a6aeb3fb675430fe9c689c7d24183bd6574afe13dd35: Status 404 returned error can't find the container with id baac0660e04e3fdc3b13a6aeb3fb675430fe9c689c7d24183bd6574afe13dd35 Jan 21 14:04:31 crc kubenswrapper[4959]: I0121 14:04:31.118171 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"68bfc54e-7f7c-4360-ab91-00c9ce7bf357","Type":"ContainerStarted","Data":"516789d9dc98565ab7112dbef2f801cd7c84e37654a9d7b40835e5df6a484c64"} Jan 21 14:04:31 crc kubenswrapper[4959]: I0121 14:04:31.141699 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qz72x" 
event={"ID":"19df0d94-f175-4339-ad82-e68d078070de","Type":"ContainerStarted","Data":"bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e"} Jan 21 14:04:31 crc kubenswrapper[4959]: I0121 14:04:31.150823 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3101b127-ec2a-4baf-94fa-799b831a5aed","Type":"ContainerStarted","Data":"0970b491d26d9a3db5635315453359cafc8d0a0a2933918f103fae1ff2587a3e"} Jan 21 14:04:31 crc kubenswrapper[4959]: I0121 14:04:31.153443 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b3f2df34-4101-4b74-b832-ca34fb11e70c","Type":"ContainerStarted","Data":"baac0660e04e3fdc3b13a6aeb3fb675430fe9c689c7d24183bd6574afe13dd35"} Jan 21 14:04:31 crc kubenswrapper[4959]: I0121 14:04:31.157904 4959 generic.go:334] "Generic (PLEG): container finished" podID="b2f855a6-8d49-4c3c-97c2-ce1aee877c27" containerID="830818745c9e99effb3e2ac91a74d8745e148e2ff450021e83e804cbe1d3b442" exitCode=0 Jan 21 14:04:31 crc kubenswrapper[4959]: I0121 14:04:31.157957 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx" event={"ID":"b2f855a6-8d49-4c3c-97c2-ce1aee877c27","Type":"ContainerDied","Data":"830818745c9e99effb3e2ac91a74d8745e148e2ff450021e83e804cbe1d3b442"} Jan 21 14:04:31 crc kubenswrapper[4959]: I0121 14:04:31.157985 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx" event={"ID":"b2f855a6-8d49-4c3c-97c2-ce1aee877c27","Type":"ContainerStarted","Data":"b872c747b803796996c139917b2b39fe2cb32d886df66c7088bb2366d155bb30"} Jan 21 14:04:32 crc kubenswrapper[4959]: I0121 14:04:32.169406 4959 generic.go:334] "Generic (PLEG): container finished" podID="19df0d94-f175-4339-ad82-e68d078070de" containerID="bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e" exitCode=0 Jan 21 14:04:32 crc kubenswrapper[4959]: I0121 14:04:32.169636 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qz72x" event={"ID":"19df0d94-f175-4339-ad82-e68d078070de","Type":"ContainerDied","Data":"bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e"} Jan 21 14:04:32 crc kubenswrapper[4959]: I0121 14:04:32.175057 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx" event={"ID":"b2f855a6-8d49-4c3c-97c2-ce1aee877c27","Type":"ContainerStarted","Data":"e544f4ad40cdd0bf3f29c242c0fad1a5e58e2f24f115b3680f81254b3a99a727"} Jan 21 14:04:32 crc kubenswrapper[4959]: I0121 14:04:32.175376 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx" Jan 21 14:04:32 crc kubenswrapper[4959]: I0121 14:04:32.546702 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx" podStartSLOduration=3.546679074 podStartE2EDuration="3.546679074s" podCreationTimestamp="2026-01-21 14:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:04:32.208668716 +0000 UTC m=+3333.171699259" watchObservedRunningTime="2026-01-21 14:04:32.546679074 +0000 UTC m=+3333.509709627" Jan 21 14:04:32 crc kubenswrapper[4959]: I0121 14:04:32.549526 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 21 14:04:33 crc kubenswrapper[4959]: I0121 14:04:33.201643 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-scheduler-0" event={"ID":"3101b127-ec2a-4baf-94fa-799b831a5aed","Type":"ContainerStarted","Data":"ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2"} Jan 21 14:04:33 crc kubenswrapper[4959]: I0121 14:04:33.205812 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b3f2df34-4101-4b74-b832-ca34fb11e70c","Type":"ContainerStarted","Data":"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0"} Jan 21 14:04:34 crc kubenswrapper[4959]: I0121 14:04:34.226426 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3101b127-ec2a-4baf-94fa-799b831a5aed","Type":"ContainerStarted","Data":"2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8"} Jan 21 14:04:34 crc kubenswrapper[4959]: I0121 14:04:34.384885 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:04:34 crc kubenswrapper[4959]: I0121 14:04:34.437381 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:04:35 crc kubenswrapper[4959]: I0121 14:04:35.237745 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qz72x" event={"ID":"19df0d94-f175-4339-ad82-e68d078070de","Type":"ContainerStarted","Data":"05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde"} Jan 21 14:04:35 crc kubenswrapper[4959]: I0121 14:04:35.240949 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerName="manila-api-log" containerID="cri-o://52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0" gracePeriod=30 Jan 21 14:04:35 crc kubenswrapper[4959]: I0121 14:04:35.241302 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b3f2df34-4101-4b74-b832-ca34fb11e70c","Type":"ContainerStarted","Data":"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b"} Jan 21 14:04:35 crc kubenswrapper[4959]: I0121 14:04:35.241682 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 21 14:04:35 crc kubenswrapper[4959]: I0121 14:04:35.241759 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerName="manila-api" containerID="cri-o://6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b" gracePeriod=30 Jan 21 14:04:35 crc kubenswrapper[4959]: I0121 14:04:35.259547 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qz72x" podStartSLOduration=5.501727825 podStartE2EDuration="9.259525311s" podCreationTimestamp="2026-01-21 14:04:26 +0000 UTC" firstStartedPulling="2026-01-21 14:04:29.08901746 +0000 UTC m=+3330.052048003" lastFinishedPulling="2026-01-21 14:04:32.846814946 +0000 UTC m=+3333.809845489" observedRunningTime="2026-01-21 14:04:35.256665186 +0000 UTC m=+3336.219695749" watchObservedRunningTime="2026-01-21 14:04:35.259525311 +0000 UTC m=+3336.222555854" Jan 21 14:04:35 crc kubenswrapper[4959]: I0121 14:04:35.288968 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.288946935 podStartE2EDuration="6.288946935s" podCreationTimestamp="2026-01-21 14:04:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:04:35.27923053 +0000 UTC m=+3336.242261103" watchObservedRunningTime="2026-01-21 14:04:35.288946935 +0000 UTC m=+3336.251977478" Jan 21 14:04:35 crc kubenswrapper[4959]: I0121 14:04:35.315443 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=5.65432672 podStartE2EDuration="6.315424142s" podCreationTimestamp="2026-01-21 14:04:29 +0000 UTC" firstStartedPulling="2026-01-21 14:04:30.40444009 +0000 UTC m=+3331.367470633" lastFinishedPulling="2026-01-21 14:04:31.065537512 +0000 UTC m=+3332.028568055" observedRunningTime="2026-01-21 14:04:35.303984791 +0000 UTC m=+3336.267015334" watchObservedRunningTime="2026-01-21 14:04:35.315424142 +0000 UTC m=+3336.278454685" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.096814 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.199805 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data-custom\") pod \"b3f2df34-4101-4b74-b832-ca34fb11e70c\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.199940 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-scripts\") pod \"b3f2df34-4101-4b74-b832-ca34fb11e70c\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.199982 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f2df34-4101-4b74-b832-ca34fb11e70c-logs\") pod \"b3f2df34-4101-4b74-b832-ca34fb11e70c\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.200132 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckhkh\" (UniqueName: \"kubernetes.io/projected/b3f2df34-4101-4b74-b832-ca34fb11e70c-kube-api-access-ckhkh\") pod \"b3f2df34-4101-4b74-b832-ca34fb11e70c\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.200217 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f2df34-4101-4b74-b832-ca34fb11e70c-etc-machine-id\") pod \"b3f2df34-4101-4b74-b832-ca34fb11e70c\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.200238 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-combined-ca-bundle\") pod \"b3f2df34-4101-4b74-b832-ca34fb11e70c\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.200287 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data\") pod \"b3f2df34-4101-4b74-b832-ca34fb11e70c\" (UID: \"b3f2df34-4101-4b74-b832-ca34fb11e70c\") " Jan 21 14:04:36 crc kubenswrapper[4959]: 
I0121 14:04:36.200378 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3f2df34-4101-4b74-b832-ca34fb11e70c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b3f2df34-4101-4b74-b832-ca34fb11e70c" (UID: "b3f2df34-4101-4b74-b832-ca34fb11e70c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.200598 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f2df34-4101-4b74-b832-ca34fb11e70c-logs" (OuterVolumeSpecName: "logs") pod "b3f2df34-4101-4b74-b832-ca34fb11e70c" (UID: "b3f2df34-4101-4b74-b832-ca34fb11e70c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.201398 4959 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3f2df34-4101-4b74-b832-ca34fb11e70c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.201420 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f2df34-4101-4b74-b832-ca34fb11e70c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.207003 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f2df34-4101-4b74-b832-ca34fb11e70c-kube-api-access-ckhkh" (OuterVolumeSpecName: "kube-api-access-ckhkh") pod "b3f2df34-4101-4b74-b832-ca34fb11e70c" (UID: "b3f2df34-4101-4b74-b832-ca34fb11e70c"). InnerVolumeSpecName "kube-api-access-ckhkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.210247 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-scripts" (OuterVolumeSpecName: "scripts") pod "b3f2df34-4101-4b74-b832-ca34fb11e70c" (UID: "b3f2df34-4101-4b74-b832-ca34fb11e70c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.211449 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b3f2df34-4101-4b74-b832-ca34fb11e70c" (UID: "b3f2df34-4101-4b74-b832-ca34fb11e70c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.237607 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3f2df34-4101-4b74-b832-ca34fb11e70c" (UID: "b3f2df34-4101-4b74-b832-ca34fb11e70c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.253499 4959 generic.go:334] "Generic (PLEG): container finished" podID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerID="6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b" exitCode=0 Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.253540 4959 generic.go:334] "Generic (PLEG): container finished" podID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerID="52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0" exitCode=143 Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.253589 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b3f2df34-4101-4b74-b832-ca34fb11e70c","Type":"ContainerDied","Data":"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b"} Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.253675 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b3f2df34-4101-4b74-b832-ca34fb11e70c","Type":"ContainerDied","Data":"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0"} Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.253693 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"b3f2df34-4101-4b74-b832-ca34fb11e70c","Type":"ContainerDied","Data":"baac0660e04e3fdc3b13a6aeb3fb675430fe9c689c7d24183bd6574afe13dd35"} Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.253714 4959 scope.go:117] "RemoveContainer" containerID="6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.254772 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.273258 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data" (OuterVolumeSpecName: "config-data") pod "b3f2df34-4101-4b74-b832-ca34fb11e70c" (UID: "b3f2df34-4101-4b74-b832-ca34fb11e70c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.303571 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.303616 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckhkh\" (UniqueName: \"kubernetes.io/projected/b3f2df34-4101-4b74-b832-ca34fb11e70c-kube-api-access-ckhkh\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.303662 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.303672 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.303690 4959 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3f2df34-4101-4b74-b832-ca34fb11e70c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.305733 4959 scope.go:117] "RemoveContainer" containerID="52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.333327 4959 scope.go:117] "RemoveContainer" containerID="6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b" Jan 21 14:04:36 crc kubenswrapper[4959]: E0121 14:04:36.334392 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b\": container with ID starting with 6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b not found: ID does not exist" containerID="6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.334444 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b"} err="failed to get container status \"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b\": rpc error: code = NotFound desc = could not find container \"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b\": container with ID starting with 6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b not found: ID does not exist" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.334470 4959 scope.go:117] "RemoveContainer" containerID="52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0" Jan 21 14:04:36 crc kubenswrapper[4959]: E0121 14:04:36.335163 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0\": container with ID starting with 52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0 not found: ID does not exist" containerID="52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.335192 4959 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0"} err="failed to get container status \"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0\": rpc error: code = NotFound desc = could not find container \"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0\": container with ID starting with 52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0 not found: ID does not exist" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.335208 4959 scope.go:117] "RemoveContainer" containerID="6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.335484 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b"} err="failed to get container status \"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b\": rpc error: code = NotFound desc = could not find container \"6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b\": container with ID starting with 6fb5d55462d20f4d84f00171182b86675306447556ba894caa8faed5e285389b not found: ID does not exist" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.335510 4959 scope.go:117] "RemoveContainer" containerID="52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.335754 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0"} err="failed to get container status \"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0\": rpc error: code = NotFound desc = could not find container \"52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0\": container with ID starting with 52921439a37bc682ffcc9db997afb22bc655bc6874144db58bf8226d6831e3d0 not found: ID does not exist" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.596201 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.618332 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.635852 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 21 14:04:36 crc kubenswrapper[4959]: E0121 14:04:36.636438 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerName="manila-api-log" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.636456 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerName="manila-api-log" Jan 21 14:04:36 crc kubenswrapper[4959]: E0121 14:04:36.636481 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerName="manila-api" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.636489 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerName="manila-api" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.636693 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerName="manila-api-log" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.636715 4959 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" containerName="manila-api" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.637712 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.640301 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.640578 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.640819 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.651140 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.710929 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.711015 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-scripts\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.711060 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-config-data\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.711112 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-config-data-custom\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.711160 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6d5e362-311d-4ea4-97bf-c1550267ab81-etc-machine-id\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.711231 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6d5e362-311d-4ea4-97bf-c1550267ab81-logs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.711255 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54w2\" (UniqueName: \"kubernetes.io/projected/f6d5e362-311d-4ea4-97bf-c1550267ab81-kube-api-access-v54w2\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " 
pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.714307 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-public-tls-certs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.714486 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.815687 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54w2\" (UniqueName: \"kubernetes.io/projected/f6d5e362-311d-4ea4-97bf-c1550267ab81-kube-api-access-v54w2\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.815741 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-public-tls-certs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.815826 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.815856 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.815901 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-scripts\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.815931 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-config-data\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.815955 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-config-data-custom\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.815983 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6d5e362-311d-4ea4-97bf-c1550267ab81-etc-machine-id\") pod 
\"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.816033 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6d5e362-311d-4ea4-97bf-c1550267ab81-logs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.816154 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6d5e362-311d-4ea4-97bf-c1550267ab81-etc-machine-id\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.816759 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6d5e362-311d-4ea4-97bf-c1550267ab81-logs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.821883 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.821929 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.824825 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-scripts\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.825029 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-public-tls-certs\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.826264 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-config-data-custom\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.844300 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5e362-311d-4ea4-97bf-c1550267ab81-config-data\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.844524 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54w2\" (UniqueName: \"kubernetes.io/projected/f6d5e362-311d-4ea4-97bf-c1550267ab81-kube-api-access-v54w2\") pod \"manila-api-0\" (UID: \"f6d5e362-311d-4ea4-97bf-c1550267ab81\") " 
pod="openstack/manila-api-0" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.866285 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qz72x" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.866368 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qz72x" Jan 21 14:04:36 crc kubenswrapper[4959]: I0121 14:04:36.996348 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 21 14:04:37 crc kubenswrapper[4959]: I0121 14:04:37.304351 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f2df34-4101-4b74-b832-ca34fb11e70c" path="/var/lib/kubelet/pods/b3f2df34-4101-4b74-b832-ca34fb11e70c/volumes" Jan 21 14:04:37 crc kubenswrapper[4959]: I0121 14:04:37.928880 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qz72x" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="registry-server" probeResult="failure" output=< Jan 21 14:04:37 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 14:04:37 crc kubenswrapper[4959]: > Jan 21 14:04:38 crc kubenswrapper[4959]: I0121 14:04:38.278029 4959 generic.go:334] "Generic (PLEG): container finished" podID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerID="d76ca83357792d84a6bd63297e8dfe21ee488065dd8c53a700e90fd3b6e0e061" exitCode=137 Jan 21 14:04:38 crc kubenswrapper[4959]: I0121 14:04:38.278070 4959 generic.go:334] "Generic (PLEG): container finished" podID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerID="9d33e0d02a55ed982cc97134b8b4cc3e790273d959300bc85ec4e4643de9c416" exitCode=137 Jan 21 14:04:38 crc kubenswrapper[4959]: I0121 14:04:38.278138 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc676b89-n65dj" event={"ID":"3f3d4806-6323-4d63-a9bc-0b6c29d95b45","Type":"ContainerDied","Data":"d76ca83357792d84a6bd63297e8dfe21ee488065dd8c53a700e90fd3b6e0e061"} Jan 21 14:04:38 crc kubenswrapper[4959]: I0121 14:04:38.278167 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc676b89-n65dj" event={"ID":"3f3d4806-6323-4d63-a9bc-0b6c29d95b45","Type":"ContainerDied","Data":"9d33e0d02a55ed982cc97134b8b4cc3e790273d959300bc85ec4e4643de9c416"} Jan 21 14:04:38 crc kubenswrapper[4959]: I0121 14:04:38.280696 4959 generic.go:334] "Generic (PLEG): container finished" podID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerID="d6cb9db287b976c79a6dda35e4c1ff728ee847f49d4154216364e2ff5907acc4" exitCode=137 Jan 21 14:04:38 crc kubenswrapper[4959]: I0121 14:04:38.280744 4959 generic.go:334] "Generic (PLEG): container finished" podID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerID="02126e5b8910499451e6c11be45f2c25a4bcc6ae30932766a31b0e89ea106645" exitCode=137 Jan 21 14:04:38 crc kubenswrapper[4959]: I0121 14:04:38.280761 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f99fb4f97-btmqc" event={"ID":"6db8d66c-73bb-4ce5-81db-4254e41e78ad","Type":"ContainerDied","Data":"d6cb9db287b976c79a6dda35e4c1ff728ee847f49d4154216364e2ff5907acc4"} Jan 21 14:04:38 crc kubenswrapper[4959]: I0121 14:04:38.280786 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f99fb4f97-btmqc" event={"ID":"6db8d66c-73bb-4ce5-81db-4254e41e78ad","Type":"ContainerDied","Data":"02126e5b8910499451e6c11be45f2c25a4bcc6ae30932766a31b0e89ea106645"} Jan 21 14:04:39 crc kubenswrapper[4959]: I0121 
14:04:39.738713 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 21 14:04:39 crc kubenswrapper[4959]: I0121 14:04:39.836430 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-g8mkx" Jan 21 14:04:39 crc kubenswrapper[4959]: I0121 14:04:39.899189 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-l8dd8"] Jan 21 14:04:39 crc kubenswrapper[4959]: I0121 14:04:39.899481 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" podUID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerName="dnsmasq-dns" containerID="cri-o://3d2a628cc215838558b7e209f43ee7c92f7379750aa178f0a7a44d784ffcf9d5" gracePeriod=10 Jan 21 14:04:40 crc kubenswrapper[4959]: I0121 14:04:40.303890 4959 generic.go:334] "Generic (PLEG): container finished" podID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerID="3d2a628cc215838558b7e209f43ee7c92f7379750aa178f0a7a44d784ffcf9d5" exitCode=0 Jan 21 14:04:40 crc kubenswrapper[4959]: I0121 14:04:40.303941 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" event={"ID":"4132df39-cbe0-451d-9eae-39ea25e6ce18","Type":"ContainerDied","Data":"3d2a628cc215838558b7e209f43ee7c92f7379750aa178f0a7a44d784ffcf9d5"} Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.322684 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.509949 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.592626 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.621194 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-sb\") pod \"4132df39-cbe0-451d-9eae-39ea25e6ce18\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.621236 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-nb\") pod \"4132df39-cbe0-451d-9eae-39ea25e6ce18\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.621276 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g299r\" (UniqueName: \"kubernetes.io/projected/4132df39-cbe0-451d-9eae-39ea25e6ce18-kube-api-access-g299r\") pod \"4132df39-cbe0-451d-9eae-39ea25e6ce18\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.621296 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-config\") pod \"4132df39-cbe0-451d-9eae-39ea25e6ce18\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.621336 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-dns-svc\") pod \"4132df39-cbe0-451d-9eae-39ea25e6ce18\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.621358 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-openstack-edpm-ipam\") pod \"4132df39-cbe0-451d-9eae-39ea25e6ce18\" (UID: \"4132df39-cbe0-451d-9eae-39ea25e6ce18\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.630357 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4132df39-cbe0-451d-9eae-39ea25e6ce18-kube-api-access-g299r" (OuterVolumeSpecName: "kube-api-access-g299r") pod "4132df39-cbe0-451d-9eae-39ea25e6ce18" (UID: "4132df39-cbe0-451d-9eae-39ea25e6ce18"). InnerVolumeSpecName "kube-api-access-g299r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.721160 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4132df39-cbe0-451d-9eae-39ea25e6ce18" (UID: "4132df39-cbe0-451d-9eae-39ea25e6ce18"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.723453 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g299r\" (UniqueName: \"kubernetes.io/projected/4132df39-cbe0-451d-9eae-39ea25e6ce18-kube-api-access-g299r\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.723576 4959 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.740768 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4132df39-cbe0-451d-9eae-39ea25e6ce18" (UID: "4132df39-cbe0-451d-9eae-39ea25e6ce18"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.750807 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4132df39-cbe0-451d-9eae-39ea25e6ce18" (UID: "4132df39-cbe0-451d-9eae-39ea25e6ce18"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.753828 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-config" (OuterVolumeSpecName: "config") pod "4132df39-cbe0-451d-9eae-39ea25e6ce18" (UID: "4132df39-cbe0-451d-9eae-39ea25e6ce18"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.779516 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4132df39-cbe0-451d-9eae-39ea25e6ce18" (UID: "4132df39-cbe0-451d-9eae-39ea25e6ce18"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.827370 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.827398 4959 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.827406 4959 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.827418 4959 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4132df39-cbe0-451d-9eae-39ea25e6ce18-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.834623 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.872255 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.928852 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6db8d66c-73bb-4ce5-81db-4254e41e78ad-logs\") pod \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.928905 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-config-data\") pod \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.928958 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-scripts\") pod \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.929062 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhkp8\" (UniqueName: \"kubernetes.io/projected/6db8d66c-73bb-4ce5-81db-4254e41e78ad-kube-api-access-jhkp8\") pod \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.929123 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzvq5\" (UniqueName: \"kubernetes.io/projected/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-kube-api-access-kzvq5\") pod \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.929164 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-logs\") pod \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.929185 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-config-data\") pod \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.929223 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-horizon-secret-key\") pod \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\" (UID: \"3f3d4806-6323-4d63-a9bc-0b6c29d95b45\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.929277 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6db8d66c-73bb-4ce5-81db-4254e41e78ad-horizon-secret-key\") pod \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\" (UID: \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.929293 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-scripts\") pod \"6db8d66c-73bb-4ce5-81db-4254e41e78ad\" (UID: 
\"6db8d66c-73bb-4ce5-81db-4254e41e78ad\") " Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.930111 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db8d66c-73bb-4ce5-81db-4254e41e78ad-logs" (OuterVolumeSpecName: "logs") pod "6db8d66c-73bb-4ce5-81db-4254e41e78ad" (UID: "6db8d66c-73bb-4ce5-81db-4254e41e78ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.932768 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-logs" (OuterVolumeSpecName: "logs") pod "3f3d4806-6323-4d63-a9bc-0b6c29d95b45" (UID: "3f3d4806-6323-4d63-a9bc-0b6c29d95b45"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.944049 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db8d66c-73bb-4ce5-81db-4254e41e78ad-kube-api-access-jhkp8" (OuterVolumeSpecName: "kube-api-access-jhkp8") pod "6db8d66c-73bb-4ce5-81db-4254e41e78ad" (UID: "6db8d66c-73bb-4ce5-81db-4254e41e78ad"). InnerVolumeSpecName "kube-api-access-jhkp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.959220 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6db8d66c-73bb-4ce5-81db-4254e41e78ad-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6db8d66c-73bb-4ce5-81db-4254e41e78ad" (UID: "6db8d66c-73bb-4ce5-81db-4254e41e78ad"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.969296 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3f3d4806-6323-4d63-a9bc-0b6c29d95b45" (UID: "3f3d4806-6323-4d63-a9bc-0b6c29d95b45"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:41 crc kubenswrapper[4959]: I0121 14:04:41.974230 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-kube-api-access-kzvq5" (OuterVolumeSpecName: "kube-api-access-kzvq5") pod "3f3d4806-6323-4d63-a9bc-0b6c29d95b45" (UID: "3f3d4806-6323-4d63-a9bc-0b6c29d95b45"). InnerVolumeSpecName "kube-api-access-kzvq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.003386 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-config-data" (OuterVolumeSpecName: "config-data") pod "6db8d66c-73bb-4ce5-81db-4254e41e78ad" (UID: "6db8d66c-73bb-4ce5-81db-4254e41e78ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.008446 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-scripts" (OuterVolumeSpecName: "scripts") pod "3f3d4806-6323-4d63-a9bc-0b6c29d95b45" (UID: "3f3d4806-6323-4d63-a9bc-0b6c29d95b45"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.014194 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-config-data" (OuterVolumeSpecName: "config-data") pod "3f3d4806-6323-4d63-a9bc-0b6c29d95b45" (UID: "3f3d4806-6323-4d63-a9bc-0b6c29d95b45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.017474 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-scripts" (OuterVolumeSpecName: "scripts") pod "6db8d66c-73bb-4ce5-81db-4254e41e78ad" (UID: "6db8d66c-73bb-4ce5-81db-4254e41e78ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035238 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6db8d66c-73bb-4ce5-81db-4254e41e78ad-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035279 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035293 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035303 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhkp8\" (UniqueName: \"kubernetes.io/projected/6db8d66c-73bb-4ce5-81db-4254e41e78ad-kube-api-access-jhkp8\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035317 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzvq5\" (UniqueName: \"kubernetes.io/projected/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-kube-api-access-kzvq5\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035327 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035337 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035347 4959 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f3d4806-6323-4d63-a9bc-0b6c29d95b45-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035357 4959 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6db8d66c-73bb-4ce5-81db-4254e41e78ad-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.035368 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6db8d66c-73bb-4ce5-81db-4254e41e78ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 
14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.168329 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.400211 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f99fb4f97-btmqc" event={"ID":"6db8d66c-73bb-4ce5-81db-4254e41e78ad","Type":"ContainerDied","Data":"e5c6b4c9c82d8a3d9508035fb804a6e7f1523f0f40d01f3c2dd867e9c25b4524"} Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.400487 4959 scope.go:117] "RemoveContainer" containerID="d6cb9db287b976c79a6dda35e4c1ff728ee847f49d4154216364e2ff5907acc4" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.400609 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f99fb4f97-btmqc" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.404137 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f6d5e362-311d-4ea4-97bf-c1550267ab81","Type":"ContainerStarted","Data":"466dafdc350613d963df130f0fa79517fb503d169060cb2a9dc248e67892c416"} Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.409631 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" event={"ID":"4132df39-cbe0-451d-9eae-39ea25e6ce18","Type":"ContainerDied","Data":"50b50d08adb282de5fbd841a9ef90ba2fab8b5cfa5b8928b7ecbdfdbfbc399dd"} Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.409805 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.412029 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"68bfc54e-7f7c-4360-ab91-00c9ce7bf357","Type":"ContainerStarted","Data":"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff"} Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.419485 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdc676b89-n65dj" event={"ID":"3f3d4806-6323-4d63-a9bc-0b6c29d95b45","Type":"ContainerDied","Data":"71057f3adadf91f281c77200eb5644a309387eb03e2918fa6ab414c03aa10d1b"} Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.419566 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cdc676b89-n65dj" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.443843 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f99fb4f97-btmqc"] Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.473193 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f99fb4f97-btmqc"] Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.504634 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cdc676b89-n65dj"] Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.517110 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cdc676b89-n65dj"] Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.527360 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-l8dd8"] Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.534447 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-l8dd8"] Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.641582 4959 scope.go:117] "RemoveContainer" containerID="02126e5b8910499451e6c11be45f2c25a4bcc6ae30932766a31b0e89ea106645" Jan 21 14:04:42 crc kubenswrapper[4959]: I0121 14:04:42.716445 4959 scope.go:117] "RemoveContainer" containerID="3d2a628cc215838558b7e209f43ee7c92f7379750aa178f0a7a44d784ffcf9d5" Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.298577 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" path="/var/lib/kubelet/pods/3f3d4806-6323-4d63-a9bc-0b6c29d95b45/volumes" Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.299481 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4132df39-cbe0-451d-9eae-39ea25e6ce18" path="/var/lib/kubelet/pods/4132df39-cbe0-451d-9eae-39ea25e6ce18/volumes" Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.300141 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" path="/var/lib/kubelet/pods/6db8d66c-73bb-4ce5-81db-4254e41e78ad/volumes" Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.382228 4959 scope.go:117] "RemoveContainer" containerID="321c6ce152359da16452cee6aa64e490fea9fa9526f2bbd8d2a795e543e33244" Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.438381 4959 scope.go:117] "RemoveContainer" containerID="d76ca83357792d84a6bd63297e8dfe21ee488065dd8c53a700e90fd3b6e0e061" Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.464755 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f6d5e362-311d-4ea4-97bf-c1550267ab81","Type":"ContainerStarted","Data":"7396cd98914199a45f6d6005fa377b5efdaf54f3f98405ae53797cd178e52a3c"} Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.512903 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"68bfc54e-7f7c-4360-ab91-00c9ce7bf357","Type":"ContainerStarted","Data":"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7"} Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.553796 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.707656133 podStartE2EDuration="14.553772831s" podCreationTimestamp="2026-01-21 14:04:29 +0000 UTC" firstStartedPulling="2026-01-21 14:04:30.560121548 +0000 UTC m=+3331.523152091" lastFinishedPulling="2026-01-21 14:04:41.406238246 +0000 UTC m=+3342.369268789" 
observedRunningTime="2026-01-21 14:04:43.543071599 +0000 UTC m=+3344.506102142" watchObservedRunningTime="2026-01-21 14:04:43.553772831 +0000 UTC m=+3344.516803374" Jan 21 14:04:43 crc kubenswrapper[4959]: I0121 14:04:43.710784 4959 scope.go:117] "RemoveContainer" containerID="9d33e0d02a55ed982cc97134b8b4cc3e790273d959300bc85ec4e4643de9c416" Jan 21 14:04:44 crc kubenswrapper[4959]: I0121 14:04:44.384226 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-77d7b8cf98-z8vcx" Jan 21 14:04:44 crc kubenswrapper[4959]: I0121 14:04:44.494245 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:04:44 crc kubenswrapper[4959]: I0121 14:04:44.527526 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c5b8cfdcd-l422b"] Jan 21 14:04:44 crc kubenswrapper[4959]: I0121 14:04:44.537999 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f6d5e362-311d-4ea4-97bf-c1550267ab81","Type":"ContainerStarted","Data":"7933034bbf70e7df3da7c62dd0d1fb1983e1b00c3aa1a274bdcb74015ecc4be3"} Jan 21 14:04:44 crc kubenswrapper[4959]: I0121 14:04:44.539205 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 21 14:04:44 crc kubenswrapper[4959]: I0121 14:04:44.540818 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c5b8cfdcd-l422b" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon-log" containerID="cri-o://437127343c566bce0b0697fe89282e76a91554fcea79726985bb7ff8aad47a9e" gracePeriod=30 Jan 21 14:04:44 crc kubenswrapper[4959]: I0121 14:04:44.541651 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c5b8cfdcd-l422b" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" containerID="cri-o://2bd8458824c397c0f34cfe672440afdb7eade7f21aacf3b99c906c5b243ca97e" gracePeriod=30 Jan 21 14:04:45 crc kubenswrapper[4959]: I0121 14:04:45.177980 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=9.177961199 podStartE2EDuration="9.177961199s" podCreationTimestamp="2026-01-21 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:04:44.597896338 +0000 UTC m=+3345.560926881" watchObservedRunningTime="2026-01-21 14:04:45.177961199 +0000 UTC m=+3346.140991742" Jan 21 14:04:45 crc kubenswrapper[4959]: I0121 14:04:45.183715 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:04:45 crc kubenswrapper[4959]: I0121 14:04:45.184034 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="ceilometer-central-agent" containerID="cri-o://769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2" gracePeriod=30 Jan 21 14:04:45 crc kubenswrapper[4959]: I0121 14:04:45.184079 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="proxy-httpd" containerID="cri-o://1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9" gracePeriod=30 Jan 21 14:04:45 crc kubenswrapper[4959]: I0121 14:04:45.184142 4959 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="sg-core" containerID="cri-o://2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889" gracePeriod=30 Jan 21 14:04:45 crc kubenswrapper[4959]: I0121 14:04:45.184185 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="ceilometer-notification-agent" containerID="cri-o://103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65" gracePeriod=30 Jan 21 14:04:45 crc kubenswrapper[4959]: I0121 14:04:45.555779 4959 generic.go:334] "Generic (PLEG): container finished" podID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerID="2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889" exitCode=2 Jan 21 14:04:45 crc kubenswrapper[4959]: I0121 14:04:45.556977 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerDied","Data":"2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889"} Jan 21 14:04:46 crc kubenswrapper[4959]: I0121 14:04:46.352450 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fbc59fbb7-l8dd8" podUID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.194:5353: i/o timeout" Jan 21 14:04:46 crc kubenswrapper[4959]: I0121 14:04:46.567814 4959 generic.go:334] "Generic (PLEG): container finished" podID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerID="1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9" exitCode=0 Jan 21 14:04:46 crc kubenswrapper[4959]: I0121 14:04:46.567859 4959 generic.go:334] "Generic (PLEG): container finished" podID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerID="769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2" exitCode=0 Jan 21 14:04:46 crc kubenswrapper[4959]: I0121 14:04:46.567906 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerDied","Data":"1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9"} Jan 21 14:04:46 crc kubenswrapper[4959]: I0121 14:04:46.567971 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerDied","Data":"769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2"} Jan 21 14:04:46 crc kubenswrapper[4959]: I0121 14:04:46.915448 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qz72x" Jan 21 14:04:46 crc kubenswrapper[4959]: I0121 14:04:46.965161 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qz72x" Jan 21 14:04:47 crc kubenswrapper[4959]: I0121 14:04:47.156288 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qz72x"] Jan 21 14:04:47 crc kubenswrapper[4959]: I0121 14:04:47.988699 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c5b8cfdcd-l422b" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:37812->10.217.0.243:8443: read: connection reset by peer" Jan 21 14:04:48 crc kubenswrapper[4959]: I0121 
14:04:48.585271 4959 generic.go:334] "Generic (PLEG): container finished" podID="ce473e12-c4b4-48e6-958f-f4416083667a" containerID="2bd8458824c397c0f34cfe672440afdb7eade7f21aacf3b99c906c5b243ca97e" exitCode=0 Jan 21 14:04:48 crc kubenswrapper[4959]: I0121 14:04:48.585359 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5b8cfdcd-l422b" event={"ID":"ce473e12-c4b4-48e6-958f-f4416083667a","Type":"ContainerDied","Data":"2bd8458824c397c0f34cfe672440afdb7eade7f21aacf3b99c906c5b243ca97e"} Jan 21 14:04:48 crc kubenswrapper[4959]: I0121 14:04:48.585730 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qz72x" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="registry-server" containerID="cri-o://05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde" gracePeriod=2 Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.042834 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.053775 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qz72x" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.084701 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-config-data\") pod \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.084793 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcxb\" (UniqueName: \"kubernetes.io/projected/2e0e478b-6faf-4540-ae97-30b2c6b019cd-kube-api-access-vzcxb\") pod \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.084860 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-catalog-content\") pod \"19df0d94-f175-4339-ad82-e68d078070de\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.084904 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-run-httpd\") pod \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.084926 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-sg-core-conf-yaml\") pod \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.084948 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-utilities\") pod \"19df0d94-f175-4339-ad82-e68d078070de\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.084984 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-scripts\") pod \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.085109 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25728\" (UniqueName: \"kubernetes.io/projected/19df0d94-f175-4339-ad82-e68d078070de-kube-api-access-25728\") pod \"19df0d94-f175-4339-ad82-e68d078070de\" (UID: \"19df0d94-f175-4339-ad82-e68d078070de\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.085152 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-combined-ca-bundle\") pod \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.085219 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-log-httpd\") pod \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.085245 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-ceilometer-tls-certs\") pod \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\" (UID: \"2e0e478b-6faf-4540-ae97-30b2c6b019cd\") " Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.093713 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e0e478b-6faf-4540-ae97-30b2c6b019cd" (UID: "2e0e478b-6faf-4540-ae97-30b2c6b019cd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.096079 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19df0d94-f175-4339-ad82-e68d078070de-kube-api-access-25728" (OuterVolumeSpecName: "kube-api-access-25728") pod "19df0d94-f175-4339-ad82-e68d078070de" (UID: "19df0d94-f175-4339-ad82-e68d078070de"). InnerVolumeSpecName "kube-api-access-25728". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.098682 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e0e478b-6faf-4540-ae97-30b2c6b019cd" (UID: "2e0e478b-6faf-4540-ae97-30b2c6b019cd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.104968 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-utilities" (OuterVolumeSpecName: "utilities") pod "19df0d94-f175-4339-ad82-e68d078070de" (UID: "19df0d94-f175-4339-ad82-e68d078070de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.105442 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-scripts" (OuterVolumeSpecName: "scripts") pod "2e0e478b-6faf-4540-ae97-30b2c6b019cd" (UID: "2e0e478b-6faf-4540-ae97-30b2c6b019cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.106327 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0e478b-6faf-4540-ae97-30b2c6b019cd-kube-api-access-vzcxb" (OuterVolumeSpecName: "kube-api-access-vzcxb") pod "2e0e478b-6faf-4540-ae97-30b2c6b019cd" (UID: "2e0e478b-6faf-4540-ae97-30b2c6b019cd"). InnerVolumeSpecName "kube-api-access-vzcxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.126654 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e0e478b-6faf-4540-ae97-30b2c6b019cd" (UID: "2e0e478b-6faf-4540-ae97-30b2c6b019cd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.167397 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2e0e478b-6faf-4540-ae97-30b2c6b019cd" (UID: "2e0e478b-6faf-4540-ae97-30b2c6b019cd"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.189234 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.189267 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25728\" (UniqueName: \"kubernetes.io/projected/19df0d94-f175-4339-ad82-e68d078070de-kube-api-access-25728\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.189281 4959 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.189295 4959 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.189306 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzcxb\" (UniqueName: \"kubernetes.io/projected/2e0e478b-6faf-4540-ae97-30b2c6b019cd-kube-api-access-vzcxb\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.189318 4959 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e0e478b-6faf-4540-ae97-30b2c6b019cd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.189328 4959 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.189339 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.241274 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e0e478b-6faf-4540-ae97-30b2c6b019cd" (UID: "2e0e478b-6faf-4540-ae97-30b2c6b019cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.257556 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-config-data" (OuterVolumeSpecName: "config-data") pod "2e0e478b-6faf-4540-ae97-30b2c6b019cd" (UID: "2e0e478b-6faf-4540-ae97-30b2c6b019cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.290736 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19df0d94-f175-4339-ad82-e68d078070de" (UID: "19df0d94-f175-4339-ad82-e68d078070de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.311754 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.311786 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e0e478b-6faf-4540-ae97-30b2c6b019cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.311801 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19df0d94-f175-4339-ad82-e68d078070de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.596897 4959 generic.go:334] "Generic (PLEG): container finished" podID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerID="103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65" exitCode=0 Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.597101 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerDied","Data":"103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65"} Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.597136 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e0e478b-6faf-4540-ae97-30b2c6b019cd","Type":"ContainerDied","Data":"f06e52c26a5bdc52661ec44a97f64bd61ccfa8f2af8ba23b6563b89fdc85cfb9"} Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.597154 4959 scope.go:117] "RemoveContainer" containerID="1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.597442 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.603641 4959 generic.go:334] "Generic (PLEG): container finished" podID="19df0d94-f175-4339-ad82-e68d078070de" containerID="05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde" exitCode=0 Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.603671 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qz72x" event={"ID":"19df0d94-f175-4339-ad82-e68d078070de","Type":"ContainerDied","Data":"05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde"} Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.603696 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qz72x" event={"ID":"19df0d94-f175-4339-ad82-e68d078070de","Type":"ContainerDied","Data":"5849388b9b85fd7bfa53d5be67d6de0b8124362199151969ac42af8230b23987"} Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.603770 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qz72x" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.630127 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.660955 4959 scope.go:117] "RemoveContainer" containerID="2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.665185 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.676815 4959 scope.go:117] "RemoveContainer" containerID="103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.676959 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qz72x"] Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.685866 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qz72x"] Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696286 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696714 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="ceilometer-central-agent" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696738 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="ceilometer-central-agent" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696755 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerName="horizon" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696765 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerName="horizon" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696777 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="extract-content" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696787 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="extract-content" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696807 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerName="init" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696815 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerName="init" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696826 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="ceilometer-notification-agent" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696834 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="ceilometer-notification-agent" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696848 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="proxy-httpd" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696855 4959 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="proxy-httpd" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696869 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerName="horizon-log" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696876 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerName="horizon-log" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696898 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="registry-server" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696906 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="registry-server" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696919 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="extract-utilities" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696926 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="extract-utilities" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696938 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerName="horizon-log" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696947 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerName="horizon-log" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696955 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerName="dnsmasq-dns" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696963 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerName="dnsmasq-dns" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696974 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerName="horizon" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.696983 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerName="horizon" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.696998 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="sg-core" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.697005 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="sg-core" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700560 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="19df0d94-f175-4339-ad82-e68d078070de" containerName="registry-server" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700594 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerName="horizon-log" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700609 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="4132df39-cbe0-451d-9eae-39ea25e6ce18" containerName="dnsmasq-dns" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700623 4959 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="ceilometer-notification-agent" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700636 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="ceilometer-central-agent" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700647 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="sg-core" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700660 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerName="horizon-log" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700673 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3d4806-6323-4d63-a9bc-0b6c29d95b45" containerName="horizon" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700689 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" containerName="proxy-httpd" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.700704 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db8d66c-73bb-4ce5-81db-4254e41e78ad" containerName="horizon" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.702378 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.706204 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.706293 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.706293 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.709066 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.729939 4959 scope.go:117] "RemoveContainer" containerID="769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.760747 4959 scope.go:117] "RemoveContainer" containerID="1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.761213 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9\": container with ID starting with 1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9 not found: ID does not exist" containerID="1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.761259 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9"} err="failed to get container status \"1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9\": rpc error: code = NotFound desc = could not find container \"1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9\": container with ID starting with 1698057761c46aba1c3ad0261cb2e6d9f6c75723f50b980d7b5852b7d1f049c9 not found: ID does not exist" Jan 21 14:04:49 crc 
kubenswrapper[4959]: I0121 14:04:49.761302 4959 scope.go:117] "RemoveContainer" containerID="2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.761650 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889\": container with ID starting with 2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889 not found: ID does not exist" containerID="2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.761692 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889"} err="failed to get container status \"2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889\": rpc error: code = NotFound desc = could not find container \"2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889\": container with ID starting with 2a6ffc398e004c33a79f4d483883562eaa662c725d5d1d62e2b9cdbe7ef9f889 not found: ID does not exist" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.761741 4959 scope.go:117] "RemoveContainer" containerID="103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.762027 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65\": container with ID starting with 103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65 not found: ID does not exist" containerID="103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.762048 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65"} err="failed to get container status \"103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65\": rpc error: code = NotFound desc = could not find container \"103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65\": container with ID starting with 103f8079a5bd80512132813a1245737669310d6877f097aa6af1a26c0f8d1a65 not found: ID does not exist" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.762061 4959 scope.go:117] "RemoveContainer" containerID="769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.766619 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2\": container with ID starting with 769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2 not found: ID does not exist" containerID="769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.766687 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2"} err="failed to get container status \"769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2\": rpc error: code = NotFound desc = could not find container 
\"769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2\": container with ID starting with 769ac8e5132cb6cefd37d359928df248c97e20e6eb82c678979d16f47ed4e8a2 not found: ID does not exist" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.766715 4959 scope.go:117] "RemoveContainer" containerID="05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.770268 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.783913 4959 scope.go:117] "RemoveContainer" containerID="bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.808846 4959 scope.go:117] "RemoveContainer" containerID="b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.827748 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.827831 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19a25822-a400-4324-80d6-af9aa79d33a0-run-httpd\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.827881 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-scripts\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.827911 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.828040 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.828426 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-config-data\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.828544 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19a25822-a400-4324-80d6-af9aa79d33a0-log-httpd\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: 
I0121 14:04:49.828738 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6vt\" (UniqueName: \"kubernetes.io/projected/19a25822-a400-4324-80d6-af9aa79d33a0-kube-api-access-mq6vt\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.866013 4959 scope.go:117] "RemoveContainer" containerID="05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.866602 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde\": container with ID starting with 05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde not found: ID does not exist" containerID="05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.866643 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde"} err="failed to get container status \"05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde\": rpc error: code = NotFound desc = could not find container \"05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde\": container with ID starting with 05867bf5516b0ce97f17ab3ac171d9a648888a1b83a9f149aeb552465caa9bde not found: ID does not exist" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.866667 4959 scope.go:117] "RemoveContainer" containerID="bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.867065 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e\": container with ID starting with bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e not found: ID does not exist" containerID="bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.867137 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e"} err="failed to get container status \"bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e\": rpc error: code = NotFound desc = could not find container \"bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e\": container with ID starting with bbbefacf202409e34891dd2ad6e3e3be0abdded9323f46e7fdf6cf94d99c605e not found: ID does not exist" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.867174 4959 scope.go:117] "RemoveContainer" containerID="b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be" Jan 21 14:04:49 crc kubenswrapper[4959]: E0121 14:04:49.867598 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be\": container with ID starting with b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be not found: ID does not exist" containerID="b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.867639 4959 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be"} err="failed to get container status \"b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be\": rpc error: code = NotFound desc = could not find container \"b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be\": container with ID starting with b8c6daad5b4ec03a320a7140cd23dee2b906a5db96525843b93f0a9bc5f6d5be not found: ID does not exist" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.930641 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19a25822-a400-4324-80d6-af9aa79d33a0-log-httpd\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.930768 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq6vt\" (UniqueName: \"kubernetes.io/projected/19a25822-a400-4324-80d6-af9aa79d33a0-kube-api-access-mq6vt\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.930855 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.930980 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19a25822-a400-4324-80d6-af9aa79d33a0-run-httpd\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.931028 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-scripts\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.931109 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.931143 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.931288 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-config-data\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.931292 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/19a25822-a400-4324-80d6-af9aa79d33a0-log-httpd\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.931509 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19a25822-a400-4324-80d6-af9aa79d33a0-run-httpd\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.944874 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-scripts\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.945663 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-config-data\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.945975 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.946516 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.948085 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a25822-a400-4324-80d6-af9aa79d33a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:49 crc kubenswrapper[4959]: I0121 14:04:49.950580 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq6vt\" (UniqueName: \"kubernetes.io/projected/19a25822-a400-4324-80d6-af9aa79d33a0-kube-api-access-mq6vt\") pod \"ceilometer-0\" (UID: \"19a25822-a400-4324-80d6-af9aa79d33a0\") " pod="openstack/ceilometer-0" Jan 21 14:04:50 crc kubenswrapper[4959]: I0121 14:04:50.031926 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:04:50 crc kubenswrapper[4959]: I0121 14:04:50.489699 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:04:50 crc kubenswrapper[4959]: I0121 14:04:50.627038 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19a25822-a400-4324-80d6-af9aa79d33a0","Type":"ContainerStarted","Data":"9135edc83c6ba30f38fbd85840a83e842eb724d9752861993fd4928bd4762a42"} Jan 21 14:04:51 crc kubenswrapper[4959]: I0121 14:04:51.297501 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19df0d94-f175-4339-ad82-e68d078070de" path="/var/lib/kubelet/pods/19df0d94-f175-4339-ad82-e68d078070de/volumes" Jan 21 14:04:51 crc kubenswrapper[4959]: I0121 14:04:51.300826 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0e478b-6faf-4540-ae97-30b2c6b019cd" path="/var/lib/kubelet/pods/2e0e478b-6faf-4540-ae97-30b2c6b019cd/volumes" Jan 21 14:04:51 crc kubenswrapper[4959]: I0121 14:04:51.424753 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 21 14:04:51 crc kubenswrapper[4959]: I0121 14:04:51.467276 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 14:04:51 crc kubenswrapper[4959]: I0121 14:04:51.648389 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerName="manila-scheduler" containerID="cri-o://ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2" gracePeriod=30 Jan 21 14:04:51 crc kubenswrapper[4959]: I0121 14:04:51.648480 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerName="probe" containerID="cri-o://2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8" gracePeriod=30 Jan 21 14:04:51 crc kubenswrapper[4959]: I0121 14:04:51.648711 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19a25822-a400-4324-80d6-af9aa79d33a0","Type":"ContainerStarted","Data":"de573a1d8b2efe8b2181844ad9543db55379f38a1a17adb684b9979dff4269b3"} Jan 21 14:04:52 crc kubenswrapper[4959]: I0121 14:04:52.801643 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19a25822-a400-4324-80d6-af9aa79d33a0","Type":"ContainerStarted","Data":"4cdc879dec8cbee261d10739cce27e7e1db6fcca6c6135fd091c132063fc200c"} Jan 21 14:04:52 crc kubenswrapper[4959]: I0121 14:04:52.832891 4959 generic.go:334] "Generic (PLEG): container finished" podID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerID="2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8" exitCode=0 Jan 21 14:04:52 crc kubenswrapper[4959]: I0121 14:04:52.832948 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3101b127-ec2a-4baf-94fa-799b831a5aed","Type":"ContainerDied","Data":"2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8"} Jan 21 14:04:53 crc kubenswrapper[4959]: I0121 14:04:53.843713 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19a25822-a400-4324-80d6-af9aa79d33a0","Type":"ContainerStarted","Data":"107a71587624823123d676b1fb6e03c1a019092d274536366da857986420c75b"} Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.759496 4959 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.864048 4959 generic.go:334] "Generic (PLEG): container finished" podID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerID="ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2" exitCode=0 Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.864128 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3101b127-ec2a-4baf-94fa-799b831a5aed","Type":"ContainerDied","Data":"ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2"} Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.864159 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3101b127-ec2a-4baf-94fa-799b831a5aed","Type":"ContainerDied","Data":"0970b491d26d9a3db5635315453359cafc8d0a0a2933918f103fae1ff2587a3e"} Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.864178 4959 scope.go:117] "RemoveContainer" containerID="2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.864356 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.872794 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19a25822-a400-4324-80d6-af9aa79d33a0","Type":"ContainerStarted","Data":"991d51a93a50901d5ab111d05a0d73e503a6c7821d6cfda2046ac55a35b66380"} Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.874071 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.888196 4959 scope.go:117] "RemoveContainer" containerID="ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.917335 4959 scope.go:117] "RemoveContainer" containerID="2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8" Jan 21 14:04:55 crc kubenswrapper[4959]: E0121 14:04:55.918031 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8\": container with ID starting with 2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8 not found: ID does not exist" containerID="2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.918083 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8"} err="failed to get container status \"2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8\": rpc error: code = NotFound desc = could not find container \"2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8\": container with ID starting with 2c92ed1fce1dc3496def6669311d234126fe8e0fff1814adf5d46932144a5cb8 not found: ID does not exist" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.918228 4959 scope.go:117] "RemoveContainer" containerID="ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2" Jan 21 14:04:55 crc kubenswrapper[4959]: E0121 14:04:55.918651 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2\": container with ID starting with ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2 not found: ID does not exist" containerID="ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.918682 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2"} err="failed to get container status \"ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2\": rpc error: code = NotFound desc = could not find container \"ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2\": container with ID starting with ef61c6d2a036abd68317917e345a6dfcc30cbedb681e811b3e5e0faaafe228b2 not found: ID does not exist" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.940505 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data-custom\") pod \"3101b127-ec2a-4baf-94fa-799b831a5aed\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.940570 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-scripts\") pod \"3101b127-ec2a-4baf-94fa-799b831a5aed\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.940691 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-combined-ca-bundle\") pod \"3101b127-ec2a-4baf-94fa-799b831a5aed\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.940832 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data\") pod \"3101b127-ec2a-4baf-94fa-799b831a5aed\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.940882 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zqk8\" (UniqueName: \"kubernetes.io/projected/3101b127-ec2a-4baf-94fa-799b831a5aed-kube-api-access-7zqk8\") pod \"3101b127-ec2a-4baf-94fa-799b831a5aed\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.940958 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3101b127-ec2a-4baf-94fa-799b831a5aed-etc-machine-id\") pod \"3101b127-ec2a-4baf-94fa-799b831a5aed\" (UID: \"3101b127-ec2a-4baf-94fa-799b831a5aed\") " Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.941450 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3101b127-ec2a-4baf-94fa-799b831a5aed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3101b127-ec2a-4baf-94fa-799b831a5aed" (UID: "3101b127-ec2a-4baf-94fa-799b831a5aed"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.946357 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3101b127-ec2a-4baf-94fa-799b831a5aed-kube-api-access-7zqk8" (OuterVolumeSpecName: "kube-api-access-7zqk8") pod "3101b127-ec2a-4baf-94fa-799b831a5aed" (UID: "3101b127-ec2a-4baf-94fa-799b831a5aed"). InnerVolumeSpecName "kube-api-access-7zqk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.946869 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3101b127-ec2a-4baf-94fa-799b831a5aed" (UID: "3101b127-ec2a-4baf-94fa-799b831a5aed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.947239 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-scripts" (OuterVolumeSpecName: "scripts") pod "3101b127-ec2a-4baf-94fa-799b831a5aed" (UID: "3101b127-ec2a-4baf-94fa-799b831a5aed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:55 crc kubenswrapper[4959]: I0121 14:04:55.998604 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3101b127-ec2a-4baf-94fa-799b831a5aed" (UID: "3101b127-ec2a-4baf-94fa-799b831a5aed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.044412 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zqk8\" (UniqueName: \"kubernetes.io/projected/3101b127-ec2a-4baf-94fa-799b831a5aed-kube-api-access-7zqk8\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.044447 4959 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3101b127-ec2a-4baf-94fa-799b831a5aed-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.044459 4959 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.044469 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.044480 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.046188 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data" (OuterVolumeSpecName: "config-data") pod "3101b127-ec2a-4baf-94fa-799b831a5aed" (UID: "3101b127-ec2a-4baf-94fa-799b831a5aed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.146444 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3101b127-ec2a-4baf-94fa-799b831a5aed-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.193277 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.633220284 podStartE2EDuration="7.19325563s" podCreationTimestamp="2026-01-21 14:04:49 +0000 UTC" firstStartedPulling="2026-01-21 14:04:50.496253474 +0000 UTC m=+3351.459284017" lastFinishedPulling="2026-01-21 14:04:55.05628878 +0000 UTC m=+3356.019319363" observedRunningTime="2026-01-21 14:04:55.895430131 +0000 UTC m=+3356.858460684" watchObservedRunningTime="2026-01-21 14:04:56.19325563 +0000 UTC m=+3357.156286173" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.200063 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.215776 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.224512 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 14:04:56 crc kubenswrapper[4959]: E0121 14:04:56.224897 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerName="probe" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.224915 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerName="probe" Jan 21 14:04:56 crc kubenswrapper[4959]: E0121 14:04:56.224931 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerName="manila-scheduler" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.224938 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerName="manila-scheduler" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.225131 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerName="manila-scheduler" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.225157 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" containerName="probe" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.227210 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.234359 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.235552 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.250789 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-config-data\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.250862 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.250954 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-scripts\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.251038 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.251118 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.251174 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dztt7\" (UniqueName: \"kubernetes.io/projected/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-kube-api-access-dztt7\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.352180 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-scripts\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.352271 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.352323 4959 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.352351 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dztt7\" (UniqueName: \"kubernetes.io/projected/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-kube-api-access-dztt7\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.352403 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-config-data\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.352432 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.354164 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.358526 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.359429 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.369845 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-config-data\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.372912 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dztt7\" (UniqueName: \"kubernetes.io/projected/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-kube-api-access-dztt7\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.373946 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9ec2b6-d7fe-407d-896d-df14df1b2c66-scripts\") pod \"manila-scheduler-0\" (UID: \"ce9ec2b6-d7fe-407d-896d-df14df1b2c66\") " 
pod="openstack/manila-scheduler-0" Jan 21 14:04:56 crc kubenswrapper[4959]: I0121 14:04:56.549432 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 14:04:57 crc kubenswrapper[4959]: I0121 14:04:57.031212 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 14:04:57 crc kubenswrapper[4959]: I0121 14:04:57.297481 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3101b127-ec2a-4baf-94fa-799b831a5aed" path="/var/lib/kubelet/pods/3101b127-ec2a-4baf-94fa-799b831a5aed/volumes" Jan 21 14:04:57 crc kubenswrapper[4959]: I0121 14:04:57.641736 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c5b8cfdcd-l422b" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Jan 21 14:04:57 crc kubenswrapper[4959]: I0121 14:04:57.898224 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ce9ec2b6-d7fe-407d-896d-df14df1b2c66","Type":"ContainerStarted","Data":"42fa320dceb2546802713a8393f7a8a91f06cd9ef7d62c1c4bd28974f9202fed"} Jan 21 14:04:57 crc kubenswrapper[4959]: I0121 14:04:57.898300 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ce9ec2b6-d7fe-407d-896d-df14df1b2c66","Type":"ContainerStarted","Data":"0ac6edd0ae31d4093daca6957323e597c7b262fdbdbe5299b2ac7ed6f262cec9"} Jan 21 14:04:58 crc kubenswrapper[4959]: I0121 14:04:58.499847 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 21 14:04:58 crc kubenswrapper[4959]: I0121 14:04:58.907415 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ce9ec2b6-d7fe-407d-896d-df14df1b2c66","Type":"ContainerStarted","Data":"41e9d3bce413aeb35bbbf992610161402d3964948752e341dc9b519749e4fcd5"} Jan 21 14:04:58 crc kubenswrapper[4959]: I0121 14:04:58.933592 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.9335739910000003 podStartE2EDuration="2.933573991s" podCreationTimestamp="2026-01-21 14:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:04:58.926382702 +0000 UTC m=+3359.889413255" watchObservedRunningTime="2026-01-21 14:04:58.933573991 +0000 UTC m=+3359.896604524" Jan 21 14:05:01 crc kubenswrapper[4959]: I0121 14:05:01.534678 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 21 14:05:01 crc kubenswrapper[4959]: I0121 14:05:01.603998 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 14:05:01 crc kubenswrapper[4959]: I0121 14:05:01.935746 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerName="manila-share" containerID="cri-o://ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff" gracePeriod=30 Jan 21 14:05:01 crc kubenswrapper[4959]: I0121 14:05:01.935821 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" 
podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerName="probe" containerID="cri-o://2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7" gracePeriod=30 Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.803059 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.822327 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-ceph\") pod \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.822427 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7flj7\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-kube-api-access-7flj7\") pod \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.822504 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-scripts\") pod \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.822561 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data-custom\") pod \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.822588 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data\") pod \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.822681 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-var-lib-manila\") pod \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.822848 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-combined-ca-bundle\") pod \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.822918 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-etc-machine-id\") pod \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\" (UID: \"68bfc54e-7f7c-4360-ab91-00c9ce7bf357\") " Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.823695 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "68bfc54e-7f7c-4360-ab91-00c9ce7bf357" (UID: "68bfc54e-7f7c-4360-ab91-00c9ce7bf357"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.824958 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "68bfc54e-7f7c-4360-ab91-00c9ce7bf357" (UID: "68bfc54e-7f7c-4360-ab91-00c9ce7bf357"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.834592 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-kube-api-access-7flj7" (OuterVolumeSpecName: "kube-api-access-7flj7") pod "68bfc54e-7f7c-4360-ab91-00c9ce7bf357" (UID: "68bfc54e-7f7c-4360-ab91-00c9ce7bf357"). InnerVolumeSpecName "kube-api-access-7flj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.834717 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-ceph" (OuterVolumeSpecName: "ceph") pod "68bfc54e-7f7c-4360-ab91-00c9ce7bf357" (UID: "68bfc54e-7f7c-4360-ab91-00c9ce7bf357"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.835222 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-scripts" (OuterVolumeSpecName: "scripts") pod "68bfc54e-7f7c-4360-ab91-00c9ce7bf357" (UID: "68bfc54e-7f7c-4360-ab91-00c9ce7bf357"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.836416 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "68bfc54e-7f7c-4360-ab91-00c9ce7bf357" (UID: "68bfc54e-7f7c-4360-ab91-00c9ce7bf357"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.926284 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68bfc54e-7f7c-4360-ab91-00c9ce7bf357" (UID: "68bfc54e-7f7c-4360-ab91-00c9ce7bf357"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.927350 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.927422 4959 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.927437 4959 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.927471 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.927482 4959 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.927494 4959 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.927504 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7flj7\" (UniqueName: \"kubernetes.io/projected/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-kube-api-access-7flj7\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.964236 4959 generic.go:334] "Generic (PLEG): container finished" podID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerID="2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7" exitCode=0 Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.964267 4959 generic.go:334] "Generic (PLEG): container finished" podID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerID="ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff" exitCode=1 Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.964289 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"68bfc54e-7f7c-4360-ab91-00c9ce7bf357","Type":"ContainerDied","Data":"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7"} Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.964318 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"68bfc54e-7f7c-4360-ab91-00c9ce7bf357","Type":"ContainerDied","Data":"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff"} Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.964328 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"68bfc54e-7f7c-4360-ab91-00c9ce7bf357","Type":"ContainerDied","Data":"516789d9dc98565ab7112dbef2f801cd7c84e37654a9d7b40835e5df6a484c64"} Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.964333 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.964345 4959 scope.go:117] "RemoveContainer" containerID="2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7" Jan 21 14:05:02 crc kubenswrapper[4959]: I0121 14:05:02.981084 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data" (OuterVolumeSpecName: "config-data") pod "68bfc54e-7f7c-4360-ab91-00c9ce7bf357" (UID: "68bfc54e-7f7c-4360-ab91-00c9ce7bf357"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.003411 4959 scope.go:117] "RemoveContainer" containerID="ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.024024 4959 scope.go:117] "RemoveContainer" containerID="2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7" Jan 21 14:05:03 crc kubenswrapper[4959]: E0121 14:05:03.024470 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7\": container with ID starting with 2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7 not found: ID does not exist" containerID="2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.024505 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7"} err="failed to get container status \"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7\": rpc error: code = NotFound desc = could not find container \"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7\": container with ID starting with 2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7 not found: ID does not exist" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.024527 4959 scope.go:117] "RemoveContainer" containerID="ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff" Jan 21 14:05:03 crc kubenswrapper[4959]: E0121 14:05:03.024848 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff\": container with ID starting with ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff not found: ID does not exist" containerID="ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.024888 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff"} err="failed to get container status \"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff\": rpc error: code = NotFound desc = could not find container \"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff\": container with ID starting with ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff not found: ID does not exist" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.024915 4959 scope.go:117] "RemoveContainer" containerID="2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 
14:05:03.025181 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7"} err="failed to get container status \"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7\": rpc error: code = NotFound desc = could not find container \"2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7\": container with ID starting with 2c13768724cb5c31105cb836983e27a9f51ea0160b68ea3b7232f5d4692a20f7 not found: ID does not exist" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.025235 4959 scope.go:117] "RemoveContainer" containerID="ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.025420 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff"} err="failed to get container status \"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff\": rpc error: code = NotFound desc = could not find container \"ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff\": container with ID starting with ec8d0b6d4f114740aba83989cad4da72791381390ca5fa52075ed9dbcfa92eff not found: ID does not exist" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.029384 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68bfc54e-7f7c-4360-ab91-00c9ce7bf357-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.302371 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.312487 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.323821 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 14:05:03 crc kubenswrapper[4959]: E0121 14:05:03.324229 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerName="manila-share" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.324249 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerName="manila-share" Jan 21 14:05:03 crc kubenswrapper[4959]: E0121 14:05:03.324277 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerName="probe" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.324283 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerName="probe" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.324459 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerName="manila-share" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.324484 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" containerName="probe" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.335774 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.338544 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.371452 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.440088 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.440184 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-scripts\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.440255 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62r62\" (UniqueName: \"kubernetes.io/projected/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-kube-api-access-62r62\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.440287 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-config-data\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.440340 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.440413 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-ceph\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.440432 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.440451 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc 
kubenswrapper[4959]: I0121 14:05:03.542451 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62r62\" (UniqueName: \"kubernetes.io/projected/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-kube-api-access-62r62\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.542517 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-config-data\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.542597 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.542668 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-ceph\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.542694 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.542720 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.542753 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.542772 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-scripts\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.543566 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.543658 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.546701 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.547073 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-scripts\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.547321 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.547960 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-config-data\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.548137 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-ceph\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.561635 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62r62\" (UniqueName: \"kubernetes.io/projected/10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b-kube-api-access-62r62\") pod \"manila-share-share1-0\" (UID: \"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b\") " pod="openstack/manila-share-share1-0" Jan 21 14:05:03 crc kubenswrapper[4959]: I0121 14:05:03.661276 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 14:05:04 crc kubenswrapper[4959]: I0121 14:05:04.193768 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 14:05:05 crc kubenswrapper[4959]: I0121 14:05:05.006738 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b","Type":"ContainerStarted","Data":"198512bb7fbcf8540a4dc51cbac3399af87a027d84dc41161ce953926e646ff3"} Jan 21 14:05:05 crc kubenswrapper[4959]: I0121 14:05:05.007336 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b","Type":"ContainerStarted","Data":"6ee52be4e3dd76a5458d9f561a25132e00f15c7b3aae105fdbd4c014105cba6b"} Jan 21 14:05:05 crc kubenswrapper[4959]: I0121 14:05:05.296526 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68bfc54e-7f7c-4360-ab91-00c9ce7bf357" path="/var/lib/kubelet/pods/68bfc54e-7f7c-4360-ab91-00c9ce7bf357/volumes" Jan 21 14:05:06 crc kubenswrapper[4959]: I0121 14:05:06.018912 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b","Type":"ContainerStarted","Data":"bae07bde97268198b430a543d1a15895ac364850a523da6f7fd7216c0fddf2ef"} Jan 21 14:05:06 crc kubenswrapper[4959]: I0121 14:05:06.045082 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.045060953 podStartE2EDuration="3.045060953s" podCreationTimestamp="2026-01-21 14:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:05:06.04263788 +0000 UTC m=+3367.005668423" watchObservedRunningTime="2026-01-21 14:05:06.045060953 +0000 UTC m=+3367.008091496" Jan 21 14:05:06 crc kubenswrapper[4959]: I0121 14:05:06.550812 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 21 14:05:07 crc kubenswrapper[4959]: I0121 14:05:07.638280 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c5b8cfdcd-l422b" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Jan 21 14:05:07 crc kubenswrapper[4959]: I0121 14:05:07.638700 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:05:13 crc kubenswrapper[4959]: I0121 14:05:13.661783 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.150401 4959 generic.go:334] "Generic (PLEG): container finished" podID="ce473e12-c4b4-48e6-958f-f4416083667a" containerID="437127343c566bce0b0697fe89282e76a91554fcea79726985bb7ff8aad47a9e" exitCode=137 Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.150474 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5b8cfdcd-l422b" event={"ID":"ce473e12-c4b4-48e6-958f-f4416083667a","Type":"ContainerDied","Data":"437127343c566bce0b0697fe89282e76a91554fcea79726985bb7ff8aad47a9e"} Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.502065 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.686199 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-scripts\") pod \"ce473e12-c4b4-48e6-958f-f4416083667a\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.686272 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9wzl\" (UniqueName: \"kubernetes.io/projected/ce473e12-c4b4-48e6-958f-f4416083667a-kube-api-access-g9wzl\") pod \"ce473e12-c4b4-48e6-958f-f4416083667a\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.686308 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce473e12-c4b4-48e6-958f-f4416083667a-logs\") pod \"ce473e12-c4b4-48e6-958f-f4416083667a\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.686471 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-tls-certs\") pod \"ce473e12-c4b4-48e6-958f-f4416083667a\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.686495 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-secret-key\") pod \"ce473e12-c4b4-48e6-958f-f4416083667a\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.686530 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-combined-ca-bundle\") pod \"ce473e12-c4b4-48e6-958f-f4416083667a\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.686616 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-config-data\") pod \"ce473e12-c4b4-48e6-958f-f4416083667a\" (UID: \"ce473e12-c4b4-48e6-958f-f4416083667a\") " Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.687333 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce473e12-c4b4-48e6-958f-f4416083667a-logs" (OuterVolumeSpecName: "logs") pod "ce473e12-c4b4-48e6-958f-f4416083667a" (UID: "ce473e12-c4b4-48e6-958f-f4416083667a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.693358 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ce473e12-c4b4-48e6-958f-f4416083667a" (UID: "ce473e12-c4b4-48e6-958f-f4416083667a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.698529 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce473e12-c4b4-48e6-958f-f4416083667a-kube-api-access-g9wzl" (OuterVolumeSpecName: "kube-api-access-g9wzl") pod "ce473e12-c4b4-48e6-958f-f4416083667a" (UID: "ce473e12-c4b4-48e6-958f-f4416083667a"). InnerVolumeSpecName "kube-api-access-g9wzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.714867 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-scripts" (OuterVolumeSpecName: "scripts") pod "ce473e12-c4b4-48e6-958f-f4416083667a" (UID: "ce473e12-c4b4-48e6-958f-f4416083667a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.719843 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-config-data" (OuterVolumeSpecName: "config-data") pod "ce473e12-c4b4-48e6-958f-f4416083667a" (UID: "ce473e12-c4b4-48e6-958f-f4416083667a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.722385 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce473e12-c4b4-48e6-958f-f4416083667a" (UID: "ce473e12-c4b4-48e6-958f-f4416083667a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.747777 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "ce473e12-c4b4-48e6-958f-f4416083667a" (UID: "ce473e12-c4b4-48e6-958f-f4416083667a"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.789890 4959 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.790137 4959 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.790198 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.790250 4959 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce473e12-c4b4-48e6-958f-f4416083667a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.790326 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9wzl\" (UniqueName: \"kubernetes.io/projected/ce473e12-c4b4-48e6-958f-f4416083667a-kube-api-access-g9wzl\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.790407 4959 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce473e12-c4b4-48e6-958f-f4416083667a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:15 crc kubenswrapper[4959]: I0121 14:05:15.790482 4959 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce473e12-c4b4-48e6-958f-f4416083667a-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:05:16 crc kubenswrapper[4959]: I0121 14:05:16.165473 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5b8cfdcd-l422b" event={"ID":"ce473e12-c4b4-48e6-958f-f4416083667a","Type":"ContainerDied","Data":"e7f993d56d291385c7e0df6ca4df781bbea5173ef08b4a86d7a84fe006140676"} Jan 21 14:05:16 crc kubenswrapper[4959]: I0121 14:05:16.165537 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c5b8cfdcd-l422b" Jan 21 14:05:16 crc kubenswrapper[4959]: I0121 14:05:16.165825 4959 scope.go:117] "RemoveContainer" containerID="2bd8458824c397c0f34cfe672440afdb7eade7f21aacf3b99c906c5b243ca97e" Jan 21 14:05:16 crc kubenswrapper[4959]: I0121 14:05:16.222970 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c5b8cfdcd-l422b"] Jan 21 14:05:16 crc kubenswrapper[4959]: I0121 14:05:16.236351 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c5b8cfdcd-l422b"] Jan 21 14:05:16 crc kubenswrapper[4959]: I0121 14:05:16.359549 4959 scope.go:117] "RemoveContainer" containerID="437127343c566bce0b0697fe89282e76a91554fcea79726985bb7ff8aad47a9e" Jan 21 14:05:17 crc kubenswrapper[4959]: I0121 14:05:17.296544 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" path="/var/lib/kubelet/pods/ce473e12-c4b4-48e6-958f-f4416083667a/volumes" Jan 21 14:05:18 crc kubenswrapper[4959]: I0121 14:05:18.065808 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 21 14:05:20 crc kubenswrapper[4959]: I0121 14:05:20.045381 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 14:05:21 crc kubenswrapper[4959]: I0121 14:05:21.379526 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:05:21 crc kubenswrapper[4959]: I0121 14:05:21.379603 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:05:25 crc kubenswrapper[4959]: I0121 14:05:25.243978 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 21 14:05:51 crc kubenswrapper[4959]: I0121 14:05:51.379375 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:05:51 crc kubenswrapper[4959]: I0121 14:05:51.379985 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.383248 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.383794 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" 
podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.383837 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.384528 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c5b09cfc927c1cbd8d659e888e39bb3beea0a8126bf7572b394973bb1f27a34"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.384602 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://5c5b09cfc927c1cbd8d659e888e39bb3beea0a8126bf7572b394973bb1f27a34" gracePeriod=600 Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.921944 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="5c5b09cfc927c1cbd8d659e888e39bb3beea0a8126bf7572b394973bb1f27a34" exitCode=0 Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.922032 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"5c5b09cfc927c1cbd8d659e888e39bb3beea0a8126bf7572b394973bb1f27a34"} Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.922452 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"} Jan 21 14:06:21 crc kubenswrapper[4959]: I0121 14:06:21.922481 4959 scope.go:117] "RemoveContainer" containerID="b1e73ffa279556b68c7b4bba7532da1833ec17adf47e7a5a61b20926a302fb1e" Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.829722 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 14:06:28 crc kubenswrapper[4959]: E0121 14:06:28.831913 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon-log" Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.832009 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon-log" Jan 21 14:06:28 crc kubenswrapper[4959]: E0121 14:06:28.832474 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.832558 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.832777 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce473e12-c4b4-48e6-958f-f4416083667a" containerName="horizon" Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.832852 4959 
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.833644 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.836652 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.836861 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.836985 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.836652 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-w4l25"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.840509 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.960724 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl76l\" (UniqueName: \"kubernetes.io/projected/b61e1395-82fb-4c39-907d-d5aa160aa10f-kube-api-access-rl76l\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.960869 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.960912 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.961011 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.961064 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.961111 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.962631 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.962706 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:28 crc kubenswrapper[4959]: I0121 14:06:28.962775 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064422 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064493 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064531 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064569 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064642 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064674 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl76l\" (UniqueName: \"kubernetes.io/projected/b61e1395-82fb-4c39-907d-d5aa160aa10f-kube-api-access-rl76l\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064713 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064756 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.064853 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.066970 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.067547 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.067680 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.067905 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.068222 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.074514 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.077816 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.087234 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.088091 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl76l\" (UniqueName: \"kubernetes.io/projected/b61e1395-82fb-4c39-907d-d5aa160aa10f-kube-api-access-rl76l\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.108181 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.162435 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.619135 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 21 14:06:29 crc kubenswrapper[4959]: I0121 14:06:29.996690 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b61e1395-82fb-4c39-907d-d5aa160aa10f","Type":"ContainerStarted","Data":"5770ee1c1e7d7a36c349afa3526af060b3d953098000cd0773b1adddbb36c752"}
Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.224372 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rgv98"]
Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.226773 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgv98"
Need to start a new one" pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.246211 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgv98"] Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.388487 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb7p\" (UniqueName: \"kubernetes.io/projected/6c173e83-4170-4fdd-a499-e05e39cadc5e-kube-api-access-scb7p\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.388572 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-utilities\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.388608 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-catalog-content\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.491680 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scb7p\" (UniqueName: \"kubernetes.io/projected/6c173e83-4170-4fdd-a499-e05e39cadc5e-kube-api-access-scb7p\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.491760 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-utilities\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.491794 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-catalog-content\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.492727 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-catalog-content\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.493073 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-utilities\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.520285 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-scb7p\" (UniqueName: \"kubernetes.io/projected/6c173e83-4170-4fdd-a499-e05e39cadc5e-kube-api-access-scb7p\") pod \"certified-operators-rgv98\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:38 crc kubenswrapper[4959]: I0121 14:06:38.553857 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:06:40 crc kubenswrapper[4959]: I0121 14:06:40.064717 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgv98"] Jan 21 14:06:40 crc kubenswrapper[4959]: I0121 14:06:40.115905 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgv98" event={"ID":"6c173e83-4170-4fdd-a499-e05e39cadc5e","Type":"ContainerStarted","Data":"ed9962f08fd5df3bd775f7620143fbe237bc5e6e8124103917d9832037526cd2"} Jan 21 14:06:42 crc kubenswrapper[4959]: I0121 14:06:42.138357 4959 generic.go:334] "Generic (PLEG): container finished" podID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerID="573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042" exitCode=0 Jan 21 14:06:42 crc kubenswrapper[4959]: I0121 14:06:42.138414 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgv98" event={"ID":"6c173e83-4170-4fdd-a499-e05e39cadc5e","Type":"ContainerDied","Data":"573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042"} Jan 21 14:06:45 crc kubenswrapper[4959]: I0121 14:06:45.985936 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j7w7p"] Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.027364 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7w7p"] Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.027527 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.169476 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-catalog-content\") pod \"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.169597 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-utilities\") pod \"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.169646 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjqh\" (UniqueName: \"kubernetes.io/projected/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-kube-api-access-5kjqh\") pod \"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.271665 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-catalog-content\") pod \"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.271781 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-utilities\") pod \"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.271830 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjqh\" (UniqueName: \"kubernetes.io/projected/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-kube-api-access-5kjqh\") pod \"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.272353 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-utilities\") pod \"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.272379 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-catalog-content\") pod \"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.293165 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjqh\" (UniqueName: \"kubernetes.io/projected/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-kube-api-access-5kjqh\") pod 
\"redhat-marketplace-j7w7p\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:06:46 crc kubenswrapper[4959]: I0121 14:06:46.374976 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:07:10 crc kubenswrapper[4959]: E0121 14:07:10.272933 4959 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 21 14:07:10 crc kubenswrapper[4959]: E0121 14:07:10.273763 4959 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl76l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil
,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(b61e1395-82fb-4c39-907d-d5aa160aa10f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:07:10 crc kubenswrapper[4959]: E0121 14:07:10.274962 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="b61e1395-82fb-4c39-907d-d5aa160aa10f" Jan 21 14:07:10 crc kubenswrapper[4959]: I0121 14:07:10.701234 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7w7p"] Jan 21 14:07:11 crc kubenswrapper[4959]: I0121 14:07:11.024645 4959 generic.go:334] "Generic (PLEG): container finished" podID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerID="156631ac6ac56ecd2a51ec3bf1abac9e826a72ef82a88cfbfa39df725e892f79" exitCode=0 Jan 21 14:07:11 crc kubenswrapper[4959]: I0121 14:07:11.024739 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7w7p" event={"ID":"dec2c39d-0b9c-46d0-b17f-f19137bc40eb","Type":"ContainerDied","Data":"156631ac6ac56ecd2a51ec3bf1abac9e826a72ef82a88cfbfa39df725e892f79"} Jan 21 14:07:11 crc kubenswrapper[4959]: I0121 14:07:11.025180 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7w7p" event={"ID":"dec2c39d-0b9c-46d0-b17f-f19137bc40eb","Type":"ContainerStarted","Data":"289f333e250c9fd3e7c2e452a4153f4cb8cc39912cf34d039e65c5193bc5655f"} Jan 21 14:07:11 crc kubenswrapper[4959]: E0121 14:07:11.027305 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="b61e1395-82fb-4c39-907d-d5aa160aa10f" Jan 21 14:07:13 crc kubenswrapper[4959]: I0121 14:07:13.051786 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7w7p" event={"ID":"dec2c39d-0b9c-46d0-b17f-f19137bc40eb","Type":"ContainerStarted","Data":"bc5cf0d7450be9eb44410a989fb9be3b68d72082ea7658cc161aabb62f3ff89c"} Jan 21 14:07:13 crc kubenswrapper[4959]: I0121 14:07:13.054234 4959 generic.go:334] "Generic (PLEG): container finished" podID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerID="ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60" exitCode=0 Jan 21 14:07:13 crc kubenswrapper[4959]: I0121 14:07:13.054274 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgv98" event={"ID":"6c173e83-4170-4fdd-a499-e05e39cadc5e","Type":"ContainerDied","Data":"ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60"} Jan 21 14:07:14 crc kubenswrapper[4959]: I0121 14:07:14.067328 4959 generic.go:334] "Generic (PLEG): container finished" podID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerID="bc5cf0d7450be9eb44410a989fb9be3b68d72082ea7658cc161aabb62f3ff89c" exitCode=0 Jan 21 14:07:14 crc kubenswrapper[4959]: I0121 14:07:14.067405 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7w7p" 
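
The tempest container's first pull aborts with ErrImagePull ("copying config: context canceled") and the pod then flips to ImagePullBackOff on the next sync before the image eventually lands. A sketch that tallies hard pull failures per image and counts backoff syncs separately; the message strings come from the log, the tallying helper is an assumption:

import re
from collections import Counter

PULL_FAIL = re.compile(r'"PullImage from image service failed".*image="(?P<image>[^"]+)"')
BACKOFF = re.compile(r'ImagePullBackOff.*?Back-off pulling image')

def pull_failures(lines):
    """Count hard pull failures per image; ImagePullBackOff syncs are counted separately,
    since they are retries of an earlier failure rather than new pull attempts."""
    fails, backoffs = Counter(), 0
    for line in lines:
        m = PULL_FAIL.search(line)
        if m:
            fails[m.group("image")] += 1
        elif BACKOFF.search(line):
            backoffs += 1
    return fails, backoffs

For this excerpt the result would be one failure for quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified plus one backoff sync, matching the single canceled pull at 14:07:10.
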
event={"ID":"dec2c39d-0b9c-46d0-b17f-f19137bc40eb","Type":"ContainerDied","Data":"bc5cf0d7450be9eb44410a989fb9be3b68d72082ea7658cc161aabb62f3ff89c"} Jan 21 14:07:14 crc kubenswrapper[4959]: I0121 14:07:14.071559 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgv98" event={"ID":"6c173e83-4170-4fdd-a499-e05e39cadc5e","Type":"ContainerStarted","Data":"885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84"} Jan 21 14:07:14 crc kubenswrapper[4959]: I0121 14:07:14.115330 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rgv98" podStartSLOduration=32.565452384 podStartE2EDuration="36.115305134s" podCreationTimestamp="2026-01-21 14:06:38 +0000 UTC" firstStartedPulling="2026-01-21 14:07:10.2097554 +0000 UTC m=+3491.172785943" lastFinishedPulling="2026-01-21 14:07:13.75960813 +0000 UTC m=+3494.722638693" observedRunningTime="2026-01-21 14:07:14.111300909 +0000 UTC m=+3495.074331452" watchObservedRunningTime="2026-01-21 14:07:14.115305134 +0000 UTC m=+3495.078335687" Jan 21 14:07:15 crc kubenswrapper[4959]: I0121 14:07:15.098763 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7w7p" event={"ID":"dec2c39d-0b9c-46d0-b17f-f19137bc40eb","Type":"ContainerStarted","Data":"10f3ce004f4a25f5fa5b265676a55a710d791ef512d1d812d674116150fbde40"} Jan 21 14:07:15 crc kubenswrapper[4959]: I0121 14:07:15.132207 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j7w7p" podStartSLOduration=26.570240831 podStartE2EDuration="30.132171567s" podCreationTimestamp="2026-01-21 14:06:45 +0000 UTC" firstStartedPulling="2026-01-21 14:07:11.02753877 +0000 UTC m=+3491.990569313" lastFinishedPulling="2026-01-21 14:07:14.589469506 +0000 UTC m=+3495.552500049" observedRunningTime="2026-01-21 14:07:15.125560823 +0000 UTC m=+3496.088591366" watchObservedRunningTime="2026-01-21 14:07:15.132171567 +0000 UTC m=+3496.095202110" Jan 21 14:07:16 crc kubenswrapper[4959]: I0121 14:07:16.379290 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:07:16 crc kubenswrapper[4959]: I0121 14:07:16.379647 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:07:17 crc kubenswrapper[4959]: I0121 14:07:17.431964 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-j7w7p" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="registry-server" probeResult="failure" output=< Jan 21 14:07:17 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 14:07:17 crc kubenswrapper[4959]: > Jan 21 14:07:18 crc kubenswrapper[4959]: I0121 14:07:18.554087 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:07:18 crc kubenswrapper[4959]: I0121 14:07:18.554623 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:07:18 crc kubenswrapper[4959]: I0121 14:07:18.606471 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:07:19 crc kubenswrapper[4959]: I0121 14:07:19.192845 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:07:19 crc kubenswrapper[4959]: I0121 14:07:19.316705 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgv98"] Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.166656 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rgv98" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerName="registry-server" containerID="cri-o://885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84" gracePeriod=2 Jan 21 14:07:21 crc kubenswrapper[4959]: E0121 14:07:21.217885 4959 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c173e83_4170_4fdd_a499_e05e39cadc5e.slice/crio-885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.616595 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.731611 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-utilities\") pod \"6c173e83-4170-4fdd-a499-e05e39cadc5e\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.731826 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-catalog-content\") pod \"6c173e83-4170-4fdd-a499-e05e39cadc5e\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.732617 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-utilities" (OuterVolumeSpecName: "utilities") pod "6c173e83-4170-4fdd-a499-e05e39cadc5e" (UID: "6c173e83-4170-4fdd-a499-e05e39cadc5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.732676 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scb7p\" (UniqueName: \"kubernetes.io/projected/6c173e83-4170-4fdd-a499-e05e39cadc5e-kube-api-access-scb7p\") pod \"6c173e83-4170-4fdd-a499-e05e39cadc5e\" (UID: \"6c173e83-4170-4fdd-a499-e05e39cadc5e\") " Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.733647 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.739488 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c173e83-4170-4fdd-a499-e05e39cadc5e-kube-api-access-scb7p" (OuterVolumeSpecName: "kube-api-access-scb7p") pod "6c173e83-4170-4fdd-a499-e05e39cadc5e" (UID: "6c173e83-4170-4fdd-a499-e05e39cadc5e"). InnerVolumeSpecName "kube-api-access-scb7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.786995 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c173e83-4170-4fdd-a499-e05e39cadc5e" (UID: "6c173e83-4170-4fdd-a499-e05e39cadc5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.835501 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c173e83-4170-4fdd-a499-e05e39cadc5e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:07:21 crc kubenswrapper[4959]: I0121 14:07:21.835552 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scb7p\" (UniqueName: \"kubernetes.io/projected/6c173e83-4170-4fdd-a499-e05e39cadc5e-kube-api-access-scb7p\") on node \"crc\" DevicePath \"\"" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.176766 4959 generic.go:334] "Generic (PLEG): container finished" podID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerID="885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84" exitCode=0 Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.176813 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgv98" event={"ID":"6c173e83-4170-4fdd-a499-e05e39cadc5e","Type":"ContainerDied","Data":"885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84"} Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.176855 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgv98" event={"ID":"6c173e83-4170-4fdd-a499-e05e39cadc5e","Type":"ContainerDied","Data":"ed9962f08fd5df3bd775f7620143fbe237bc5e6e8124103917d9832037526cd2"} Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.176876 4959 scope.go:117] "RemoveContainer" containerID="885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.178058 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rgv98" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.197854 4959 scope.go:117] "RemoveContainer" containerID="ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.213814 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgv98"] Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.223023 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rgv98"] Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.243939 4959 scope.go:117] "RemoveContainer" containerID="573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.288378 4959 scope.go:117] "RemoveContainer" containerID="885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84" Jan 21 14:07:22 crc kubenswrapper[4959]: E0121 14:07:22.288906 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84\": container with ID starting with 885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84 not found: ID does not exist" containerID="885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.288957 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84"} err="failed to get container status \"885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84\": rpc error: code = NotFound desc = could not find container \"885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84\": container with ID starting with 885e80957614ea4d960d459211288d05753e41257f481bcae75f8903edaecb84 not found: ID does not exist" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.288988 4959 scope.go:117] "RemoveContainer" containerID="ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60" Jan 21 14:07:22 crc kubenswrapper[4959]: E0121 14:07:22.289616 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60\": container with ID starting with ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60 not found: ID does not exist" containerID="ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.289711 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60"} err="failed to get container status \"ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60\": rpc error: code = NotFound desc = could not find container \"ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60\": container with ID starting with ef21a520ebf9585dedf9a7655a70c8999651cc553a7103c71399fbf6c7d38d60 not found: ID does not exist" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.289756 4959 scope.go:117] "RemoveContainer" containerID="573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042" Jan 21 14:07:22 crc kubenswrapper[4959]: E0121 14:07:22.290190 4959 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042\": container with ID starting with 573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042 not found: ID does not exist" containerID="573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042" Jan 21 14:07:22 crc kubenswrapper[4959]: I0121 14:07:22.290222 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042"} err="failed to get container status \"573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042\": rpc error: code = NotFound desc = could not find container \"573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042\": container with ID starting with 573881993c84e0065cb1d884285be2fb984fe91c826aa9e9a62ec07fa286c042 not found: ID does not exist" Jan 21 14:07:23 crc kubenswrapper[4959]: I0121 14:07:23.298538 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" path="/var/lib/kubelet/pods/6c173e83-4170-4fdd-a499-e05e39cadc5e/volumes" Jan 21 14:07:26 crc kubenswrapper[4959]: I0121 14:07:26.549450 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:07:26 crc kubenswrapper[4959]: I0121 14:07:26.607696 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:07:27 crc kubenswrapper[4959]: I0121 14:07:27.250591 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.099622 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7w7p"] Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.100482 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j7w7p" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="registry-server" containerID="cri-o://10f3ce004f4a25f5fa5b265676a55a710d791ef512d1d812d674116150fbde40" gracePeriod=2 Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.254514 4959 generic.go:334] "Generic (PLEG): container finished" podID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerID="10f3ce004f4a25f5fa5b265676a55a710d791ef512d1d812d674116150fbde40" exitCode=0 Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.254583 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7w7p" event={"ID":"dec2c39d-0b9c-46d0-b17f-f19137bc40eb","Type":"ContainerDied","Data":"10f3ce004f4a25f5fa5b265676a55a710d791ef512d1d812d674116150fbde40"} Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.257048 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b61e1395-82fb-4c39-907d-d5aa160aa10f","Type":"ContainerStarted","Data":"7fd6fbf41473e38419f817362be4716c6f6df48d89ad4541a6fc5c60508adb65"} Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.636582 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.658846 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.041936031 podStartE2EDuration="1m2.658829163s" podCreationTimestamp="2026-01-21 14:06:27 +0000 UTC" firstStartedPulling="2026-01-21 14:06:29.630048981 +0000 UTC m=+3450.593079524" lastFinishedPulling="2026-01-21 14:07:27.246942103 +0000 UTC m=+3508.209972656" observedRunningTime="2026-01-21 14:07:29.284703135 +0000 UTC m=+3510.247733678" watchObservedRunningTime="2026-01-21 14:07:29.658829163 +0000 UTC m=+3510.621859706" Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.679025 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-catalog-content\") pod \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.679190 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-utilities\") pod \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.679320 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kjqh\" (UniqueName: \"kubernetes.io/projected/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-kube-api-access-5kjqh\") pod \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\" (UID: \"dec2c39d-0b9c-46d0-b17f-f19137bc40eb\") " Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.680808 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-utilities" (OuterVolumeSpecName: "utilities") pod "dec2c39d-0b9c-46d0-b17f-f19137bc40eb" (UID: "dec2c39d-0b9c-46d0-b17f-f19137bc40eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.694413 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-kube-api-access-5kjqh" (OuterVolumeSpecName: "kube-api-access-5kjqh") pod "dec2c39d-0b9c-46d0-b17f-f19137bc40eb" (UID: "dec2c39d-0b9c-46d0-b17f-f19137bc40eb"). InnerVolumeSpecName "kube-api-access-5kjqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.720883 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dec2c39d-0b9c-46d0-b17f-f19137bc40eb" (UID: "dec2c39d-0b9c-46d0-b17f-f19137bc40eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.782581 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.782622 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kjqh\" (UniqueName: \"kubernetes.io/projected/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-kube-api-access-5kjqh\") on node \"crc\" DevicePath \"\"" Jan 21 14:07:29 crc kubenswrapper[4959]: I0121 14:07:29.782633 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec2c39d-0b9c-46d0-b17f-f19137bc40eb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:07:30 crc kubenswrapper[4959]: I0121 14:07:30.273734 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7w7p" event={"ID":"dec2c39d-0b9c-46d0-b17f-f19137bc40eb","Type":"ContainerDied","Data":"289f333e250c9fd3e7c2e452a4153f4cb8cc39912cf34d039e65c5193bc5655f"} Jan 21 14:07:30 crc kubenswrapper[4959]: I0121 14:07:30.274117 4959 scope.go:117] "RemoveContainer" containerID="10f3ce004f4a25f5fa5b265676a55a710d791ef512d1d812d674116150fbde40" Jan 21 14:07:30 crc kubenswrapper[4959]: I0121 14:07:30.274283 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7w7p" Jan 21 14:07:30 crc kubenswrapper[4959]: I0121 14:07:30.312354 4959 scope.go:117] "RemoveContainer" containerID="bc5cf0d7450be9eb44410a989fb9be3b68d72082ea7658cc161aabb62f3ff89c" Jan 21 14:07:30 crc kubenswrapper[4959]: I0121 14:07:30.320025 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7w7p"] Jan 21 14:07:30 crc kubenswrapper[4959]: I0121 14:07:30.328260 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7w7p"] Jan 21 14:07:30 crc kubenswrapper[4959]: I0121 14:07:30.343184 4959 scope.go:117] "RemoveContainer" containerID="156631ac6ac56ecd2a51ec3bf1abac9e826a72ef82a88cfbfa39df725e892f79" Jan 21 14:07:31 crc kubenswrapper[4959]: I0121 14:07:31.301829 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" path="/var/lib/kubelet/pods/dec2c39d-0b9c-46d0-b17f-f19137bc40eb/volumes" Jan 21 14:08:21 crc kubenswrapper[4959]: I0121 14:08:21.379533 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:08:21 crc kubenswrapper[4959]: I0121 14:08:21.380118 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:08:51 crc kubenswrapper[4959]: I0121 14:08:51.380336 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:08:51 crc kubenswrapper[4959]: I0121 14:08:51.380963 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:09:21 crc kubenswrapper[4959]: I0121 14:09:21.380161 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:09:21 crc kubenswrapper[4959]: I0121 14:09:21.381319 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:09:21 crc kubenswrapper[4959]: I0121 14:09:21.381428 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 14:09:21 crc kubenswrapper[4959]: I0121 14:09:21.382502 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:09:21 crc kubenswrapper[4959]: I0121 14:09:21.382581 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" gracePeriod=600 Jan 21 14:09:29 crc kubenswrapper[4959]: I0121 14:09:29.332534 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-wwkrl_00d99d89-7cdc-410d-b2f3-347be806f79a/machine-config-daemon/13.log" Jan 21 14:09:29 crc kubenswrapper[4959]: I0121 14:09:29.334677 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" exitCode=-1 Jan 21 14:09:29 crc kubenswrapper[4959]: I0121 14:09:29.334753 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"} Jan 21 14:09:29 crc kubenswrapper[4959]: I0121 14:09:29.334892 4959 scope.go:117] "RemoveContainer" containerID="5c5b09cfc927c1cbd8d659e888e39bb3beea0a8126bf7572b394973bb1f27a34" Jan 21 14:09:33 crc kubenswrapper[4959]: E0121 14:09:33.993685 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
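[Editor's note] The three liveness failures above arrive exactly 30 seconds apart, and only after the third does the kubelet mark the container unhealthy and kill it with gracePeriod=600, which is consistent with periodSeconds=30 and the default failureThreshold of 3 (both values are inferences, not present in the log). A schematic of that probe loop; a sketch of the semantics, not kubelet's prober:

```go
package probesketch

import (
	"context"
	"net/http"
	"time"
)

// probeUntilUnhealthy GETs url every period; after failureThreshold
// consecutive failures it invokes restart, mirroring the
// "failed liveness probe, will be restarted" path in the log.
func probeUntilUnhealthy(ctx context.Context, url string, period time.Duration, failureThreshold int, restart func()) {
	client := &http.Client{Timeout: time.Second}
	failures := 0
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			resp, err := client.Get(url)
			// HTTP probes count 2xx/3xx as success, everything else as failure.
			if err != nil || resp.StatusCode < 200 || resp.StatusCode >= 400 {
				failures++
			} else {
				failures = 0
			}
			if resp != nil {
				resp.Body.Close()
			}
			if failures >= failureThreshold {
				restart() // container is killed with the pod's grace period
				return
			}
		}
	}
}
```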
Jan 21 14:09:34 crc kubenswrapper[4959]: I0121 14:09:34.379317 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:09:34 crc kubenswrapper[4959]: E0121 14:09:34.379856 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:09:47 crc kubenswrapper[4959]: I0121 14:09:47.286157 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:09:47 crc kubenswrapper[4959]: E0121 14:09:47.286855 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:09:58 crc kubenswrapper[4959]: I0121 14:09:58.286245 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:09:58 crc kubenswrapper[4959]: E0121 14:09:58.287185 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:10:13 crc kubenswrapper[4959]: I0121 14:10:13.286737 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:10:13 crc kubenswrapper[4959]: E0121 14:10:13.289990 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:10:28 crc kubenswrapper[4959]: I0121 14:10:28.287183 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:10:28 crc kubenswrapper[4959]: E0121 14:10:28.287939 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:10:39 crc kubenswrapper[4959]: I0121 14:10:39.292630 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:10:39 crc kubenswrapper[4959]: E0121 14:10:39.293476 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:10:53 crc kubenswrapper[4959]: I0121 14:10:53.286478 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:10:53 crc kubenswrapper[4959]: E0121 14:10:53.287279 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:11:06 crc kubenswrapper[4959]: I0121 14:11:06.286658 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:11:06 crc kubenswrapper[4959]: E0121 14:11:06.287617 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:11:17 crc kubenswrapper[4959]: I0121 14:11:17.286283 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:11:17 crc kubenswrapper[4959]: E0121 14:11:17.287033 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:11:31 crc kubenswrapper[4959]: I0121 14:11:31.286847 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4"
Jan 21 14:11:31 crc kubenswrapper[4959]: E0121 14:11:31.287733 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.433758 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6fsgm"]
pods=["openshift-marketplace/community-operators-6fsgm"] Jan 21 14:11:42 crc kubenswrapper[4959]: E0121 14:11:42.434793 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="registry-server" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.434808 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="registry-server" Jan 21 14:11:42 crc kubenswrapper[4959]: E0121 14:11:42.434822 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="extract-utilities" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.434828 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="extract-utilities" Jan 21 14:11:42 crc kubenswrapper[4959]: E0121 14:11:42.434834 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerName="extract-utilities" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.434841 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerName="extract-utilities" Jan 21 14:11:42 crc kubenswrapper[4959]: E0121 14:11:42.434855 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="extract-content" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.434860 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="extract-content" Jan 21 14:11:42 crc kubenswrapper[4959]: E0121 14:11:42.434886 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerName="extract-content" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.434891 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerName="extract-content" Jan 21 14:11:42 crc kubenswrapper[4959]: E0121 14:11:42.434900 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerName="registry-server" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.434906 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerName="registry-server" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.435130 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec2c39d-0b9c-46d0-b17f-f19137bc40eb" containerName="registry-server" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.435145 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c173e83-4170-4fdd-a499-e05e39cadc5e" containerName="registry-server" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.436852 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.452868 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fsgm"] Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.514956 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfs9b\" (UniqueName: \"kubernetes.io/projected/7168eca5-f3b3-49c8-b692-2952868db71f-kube-api-access-xfs9b\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.515032 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-catalog-content\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.515255 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-utilities\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.617172 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfs9b\" (UniqueName: \"kubernetes.io/projected/7168eca5-f3b3-49c8-b692-2952868db71f-kube-api-access-xfs9b\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.617253 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-catalog-content\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.617335 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-utilities\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.617755 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-catalog-content\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.617823 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-utilities\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.644888 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xfs9b\" (UniqueName: \"kubernetes.io/projected/7168eca5-f3b3-49c8-b692-2952868db71f-kube-api-access-xfs9b\") pod \"community-operators-6fsgm\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:42 crc kubenswrapper[4959]: I0121 14:11:42.755211 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:11:43 crc kubenswrapper[4959]: I0121 14:11:43.287384 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:11:43 crc kubenswrapper[4959]: E0121 14:11:43.288387 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:11:43 crc kubenswrapper[4959]: I0121 14:11:43.389970 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fsgm"] Jan 21 14:11:44 crc kubenswrapper[4959]: I0121 14:11:44.023993 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fsgm" event={"ID":"7168eca5-f3b3-49c8-b692-2952868db71f","Type":"ContainerStarted","Data":"b421623db26aec556ab9c13d66e7c272c3ed36cca5fe469ea7f0fb7d5a49a3aa"} Jan 21 14:11:45 crc kubenswrapper[4959]: I0121 14:11:45.037198 4959 generic.go:334] "Generic (PLEG): container finished" podID="7168eca5-f3b3-49c8-b692-2952868db71f" containerID="d79722a58ada6503ed56bb0ec1d2118c5478a703022b5315789b580d8985d66e" exitCode=0 Jan 21 14:11:45 crc kubenswrapper[4959]: I0121 14:11:45.037305 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fsgm" event={"ID":"7168eca5-f3b3-49c8-b692-2952868db71f","Type":"ContainerDied","Data":"d79722a58ada6503ed56bb0ec1d2118c5478a703022b5315789b580d8985d66e"} Jan 21 14:11:45 crc kubenswrapper[4959]: I0121 14:11:45.039264 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:11:50 crc kubenswrapper[4959]: I0121 14:11:50.086051 4959 generic.go:334] "Generic (PLEG): container finished" podID="7168eca5-f3b3-49c8-b692-2952868db71f" containerID="b8b4ea379e3fd673d2963ffe3cbb999df0948a3d8bad1837d3b257d5bbcde414" exitCode=0 Jan 21 14:11:50 crc kubenswrapper[4959]: I0121 14:11:50.086706 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fsgm" event={"ID":"7168eca5-f3b3-49c8-b692-2952868db71f","Type":"ContainerDied","Data":"b8b4ea379e3fd673d2963ffe3cbb999df0948a3d8bad1837d3b257d5bbcde414"} Jan 21 14:11:52 crc kubenswrapper[4959]: I0121 14:11:52.763712 4959 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-jr2mn" podUID="e5a1db10-de2f-423d-a482-087eb1eaf3d0" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 21 14:11:55 crc kubenswrapper[4959]: I0121 14:11:55.291896 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:11:55 crc kubenswrapper[4959]: E0121 14:11:55.292753 4959 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:11:57 crc kubenswrapper[4959]: I0121 14:11:57.151980 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fsgm" event={"ID":"7168eca5-f3b3-49c8-b692-2952868db71f","Type":"ContainerStarted","Data":"cd450194b097a3f7ef5b8c06bc6ee2d39f1cc8ed9cdd99ad31f01c8e1bc86969"} Jan 21 14:11:57 crc kubenswrapper[4959]: I0121 14:11:57.178550 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6fsgm" podStartSLOduration=3.850908331 podStartE2EDuration="15.178492037s" podCreationTimestamp="2026-01-21 14:11:42 +0000 UTC" firstStartedPulling="2026-01-21 14:11:45.038968807 +0000 UTC m=+3766.001999340" lastFinishedPulling="2026-01-21 14:11:56.366552503 +0000 UTC m=+3777.329583046" observedRunningTime="2026-01-21 14:11:57.171030441 +0000 UTC m=+3778.134061004" watchObservedRunningTime="2026-01-21 14:11:57.178492037 +0000 UTC m=+3778.141522580" Jan 21 14:12:02 crc kubenswrapper[4959]: I0121 14:12:02.756276 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:12:02 crc kubenswrapper[4959]: I0121 14:12:02.756813 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:12:02 crc kubenswrapper[4959]: I0121 14:12:02.863567 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:12:03 crc kubenswrapper[4959]: I0121 14:12:03.427052 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:12:03 crc kubenswrapper[4959]: I0121 14:12:03.483595 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fsgm"] Jan 21 14:12:05 crc kubenswrapper[4959]: I0121 14:12:05.227354 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6fsgm" podUID="7168eca5-f3b3-49c8-b692-2952868db71f" containerName="registry-server" containerID="cri-o://cd450194b097a3f7ef5b8c06bc6ee2d39f1cc8ed9cdd99ad31f01c8e1bc86969" gracePeriod=2 Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.240136 4959 generic.go:334] "Generic (PLEG): container finished" podID="7168eca5-f3b3-49c8-b692-2952868db71f" containerID="cd450194b097a3f7ef5b8c06bc6ee2d39f1cc8ed9cdd99ad31f01c8e1bc86969" exitCode=0 Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.240351 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fsgm" event={"ID":"7168eca5-f3b3-49c8-b692-2952868db71f","Type":"ContainerDied","Data":"cd450194b097a3f7ef5b8c06bc6ee2d39f1cc8ed9cdd99ad31f01c8e1bc86969"} Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.384988 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.459266 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfs9b\" (UniqueName: \"kubernetes.io/projected/7168eca5-f3b3-49c8-b692-2952868db71f-kube-api-access-xfs9b\") pod \"7168eca5-f3b3-49c8-b692-2952868db71f\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.459416 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-utilities\") pod \"7168eca5-f3b3-49c8-b692-2952868db71f\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.459541 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-catalog-content\") pod \"7168eca5-f3b3-49c8-b692-2952868db71f\" (UID: \"7168eca5-f3b3-49c8-b692-2952868db71f\") " Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.460853 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-utilities" (OuterVolumeSpecName: "utilities") pod "7168eca5-f3b3-49c8-b692-2952868db71f" (UID: "7168eca5-f3b3-49c8-b692-2952868db71f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.482601 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7168eca5-f3b3-49c8-b692-2952868db71f-kube-api-access-xfs9b" (OuterVolumeSpecName: "kube-api-access-xfs9b") pod "7168eca5-f3b3-49c8-b692-2952868db71f" (UID: "7168eca5-f3b3-49c8-b692-2952868db71f"). InnerVolumeSpecName "kube-api-access-xfs9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.536819 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7168eca5-f3b3-49c8-b692-2952868db71f" (UID: "7168eca5-f3b3-49c8-b692-2952868db71f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.562439 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.562483 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7168eca5-f3b3-49c8-b692-2952868db71f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:12:06 crc kubenswrapper[4959]: I0121 14:12:06.562495 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfs9b\" (UniqueName: \"kubernetes.io/projected/7168eca5-f3b3-49c8-b692-2952868db71f-kube-api-access-xfs9b\") on node \"crc\" DevicePath \"\"" Jan 21 14:12:07 crc kubenswrapper[4959]: I0121 14:12:07.252881 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fsgm" event={"ID":"7168eca5-f3b3-49c8-b692-2952868db71f","Type":"ContainerDied","Data":"b421623db26aec556ab9c13d66e7c272c3ed36cca5fe469ea7f0fb7d5a49a3aa"} Jan 21 14:12:07 crc kubenswrapper[4959]: I0121 14:12:07.253310 4959 scope.go:117] "RemoveContainer" containerID="cd450194b097a3f7ef5b8c06bc6ee2d39f1cc8ed9cdd99ad31f01c8e1bc86969" Jan 21 14:12:07 crc kubenswrapper[4959]: I0121 14:12:07.253000 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fsgm" Jan 21 14:12:07 crc kubenswrapper[4959]: I0121 14:12:07.278467 4959 scope.go:117] "RemoveContainer" containerID="b8b4ea379e3fd673d2963ffe3cbb999df0948a3d8bad1837d3b257d5bbcde414" Jan 21 14:12:07 crc kubenswrapper[4959]: I0121 14:12:07.311331 4959 scope.go:117] "RemoveContainer" containerID="d79722a58ada6503ed56bb0ec1d2118c5478a703022b5315789b580d8985d66e" Jan 21 14:12:07 crc kubenswrapper[4959]: I0121 14:12:07.327555 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fsgm"] Jan 21 14:12:07 crc kubenswrapper[4959]: I0121 14:12:07.340949 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6fsgm"] Jan 21 14:12:08 crc kubenswrapper[4959]: I0121 14:12:08.290078 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:12:08 crc kubenswrapper[4959]: E0121 14:12:08.290698 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:12:09 crc kubenswrapper[4959]: I0121 14:12:09.297228 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7168eca5-f3b3-49c8-b692-2952868db71f" path="/var/lib/kubelet/pods/7168eca5-f3b3-49c8-b692-2952868db71f/volumes" Jan 21 14:12:23 crc kubenswrapper[4959]: I0121 14:12:23.291374 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:12:23 crc kubenswrapper[4959]: E0121 14:12:23.292179 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:12:35 crc kubenswrapper[4959]: I0121 14:12:35.286669 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:12:35 crc kubenswrapper[4959]: E0121 14:12:35.287505 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:12:46 crc kubenswrapper[4959]: I0121 14:12:46.286913 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:12:46 crc kubenswrapper[4959]: E0121 14:12:46.287902 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:12:58 crc kubenswrapper[4959]: I0121 14:12:58.287506 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:12:58 crc kubenswrapper[4959]: E0121 14:12:58.288922 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:13:10 crc kubenswrapper[4959]: I0121 14:13:10.285880 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:13:10 crc kubenswrapper[4959]: E0121 14:13:10.286728 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:13:23 crc kubenswrapper[4959]: I0121 14:13:23.286707 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:13:23 crc kubenswrapper[4959]: E0121 14:13:23.288729 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:13:38 crc kubenswrapper[4959]: I0121 14:13:38.285908 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:13:38 crc kubenswrapper[4959]: E0121 14:13:38.286757 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:13:53 crc kubenswrapper[4959]: I0121 14:13:53.286459 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:13:53 crc kubenswrapper[4959]: E0121 14:13:53.287279 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:13:57 crc kubenswrapper[4959]: I0121 14:13:57.062573 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-jnfwt"] Jan 21 14:13:57 crc kubenswrapper[4959]: I0121 14:13:57.077690 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-jnfwt"] Jan 21 14:13:57 crc kubenswrapper[4959]: I0121 14:13:57.297620 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f71bcf-72a3-4877-bf31-c4c4ae441f70" path="/var/lib/kubelet/pods/44f71bcf-72a3-4877-bf31-c4c4ae441f70/volumes" Jan 21 14:13:59 crc kubenswrapper[4959]: I0121 14:13:59.028816 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-eb24-account-create-update-g2x4m"] Jan 21 14:13:59 crc kubenswrapper[4959]: I0121 14:13:59.039502 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-eb24-account-create-update-g2x4m"] Jan 21 14:13:59 crc kubenswrapper[4959]: I0121 14:13:59.297337 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b2bbae-a802-455f-8aa1-c4b1f744271a" path="/var/lib/kubelet/pods/25b2bbae-a802-455f-8aa1-c4b1f744271a/volumes" Jan 21 14:14:08 crc kubenswrapper[4959]: I0121 14:14:08.286675 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:14:08 crc kubenswrapper[4959]: E0121 14:14:08.287584 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:14:19 crc kubenswrapper[4959]: I0121 14:14:19.292661 4959 scope.go:117] "RemoveContainer" 
containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:14:19 crc kubenswrapper[4959]: E0121 14:14:19.293758 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:14:29 crc kubenswrapper[4959]: I0121 14:14:29.079555 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-ft8qw"] Jan 21 14:14:29 crc kubenswrapper[4959]: I0121 14:14:29.087603 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-ft8qw"] Jan 21 14:14:29 crc kubenswrapper[4959]: I0121 14:14:29.305333 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd45b2a-8a39-40f9-9da8-dc9f1330cc11" path="/var/lib/kubelet/pods/4dd45b2a-8a39-40f9-9da8-dc9f1330cc11/volumes" Jan 21 14:14:32 crc kubenswrapper[4959]: I0121 14:14:32.286776 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:14:32 crc kubenswrapper[4959]: E0121 14:14:32.287576 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:14:36 crc kubenswrapper[4959]: I0121 14:14:36.837707 4959 scope.go:117] "RemoveContainer" containerID="6c9629111d28cf2f157e64747735ffc0acccf4240d1f6908945182b7348a2ff0" Jan 21 14:14:36 crc kubenswrapper[4959]: I0121 14:14:36.864785 4959 scope.go:117] "RemoveContainer" containerID="1a56a61ad6a1fae7faf97c96035d45cc3a964e742123c8ceb85b8e3ff267b8f5" Jan 21 14:14:36 crc kubenswrapper[4959]: I0121 14:14:36.937461 4959 scope.go:117] "RemoveContainer" containerID="1ef790c1e4f5dc357b8103435b6560d63b66ba0977ba6140a9b156b1fa8cfc18" Jan 21 14:14:43 crc kubenswrapper[4959]: I0121 14:14:43.286188 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:14:43 crc kubenswrapper[4959]: I0121 14:14:43.806760 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"dd5517f21dd5b8720cc86f6b292858eca8ccea6e245a77483d73e8a935061861"} Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.269539 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hdm4h"] Jan 21 14:14:47 crc kubenswrapper[4959]: E0121 14:14:47.271575 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7168eca5-f3b3-49c8-b692-2952868db71f" containerName="registry-server" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.271598 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7168eca5-f3b3-49c8-b692-2952868db71f" containerName="registry-server" Jan 21 14:14:47 crc kubenswrapper[4959]: E0121 14:14:47.271714 4959 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7168eca5-f3b3-49c8-b692-2952868db71f" containerName="extract-content" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.271763 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7168eca5-f3b3-49c8-b692-2952868db71f" containerName="extract-content" Jan 21 14:14:47 crc kubenswrapper[4959]: E0121 14:14:47.271811 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7168eca5-f3b3-49c8-b692-2952868db71f" containerName="extract-utilities" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.271827 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="7168eca5-f3b3-49c8-b692-2952868db71f" containerName="extract-utilities" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.275270 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="7168eca5-f3b3-49c8-b692-2952868db71f" containerName="registry-server" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.281303 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.307868 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdm4h"] Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.319477 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-utilities\") pod \"redhat-operators-hdm4h\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.319567 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-catalog-content\") pod \"redhat-operators-hdm4h\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.319663 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pvq\" (UniqueName: \"kubernetes.io/projected/2bbce7b7-0854-40da-a6b0-eeb2783e9548-kube-api-access-t5pvq\") pod \"redhat-operators-hdm4h\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.421616 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-utilities\") pod \"redhat-operators-hdm4h\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.421729 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-catalog-content\") pod \"redhat-operators-hdm4h\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.421810 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5pvq\" (UniqueName: \"kubernetes.io/projected/2bbce7b7-0854-40da-a6b0-eeb2783e9548-kube-api-access-t5pvq\") pod \"redhat-operators-hdm4h\" (UID: 
\"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.422222 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-utilities\") pod \"redhat-operators-hdm4h\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.422252 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-catalog-content\") pod \"redhat-operators-hdm4h\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.459243 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5pvq\" (UniqueName: \"kubernetes.io/projected/2bbce7b7-0854-40da-a6b0-eeb2783e9548-kube-api-access-t5pvq\") pod \"redhat-operators-hdm4h\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:47 crc kubenswrapper[4959]: I0121 14:14:47.619970 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:48 crc kubenswrapper[4959]: I0121 14:14:48.236388 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdm4h"] Jan 21 14:14:48 crc kubenswrapper[4959]: W0121 14:14:48.242395 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bbce7b7_0854_40da_a6b0_eeb2783e9548.slice/crio-42bb6a845377cce2494507979b7d158e25e0c81853ffd2054db2445279aece85 WatchSource:0}: Error finding container 42bb6a845377cce2494507979b7d158e25e0c81853ffd2054db2445279aece85: Status 404 returned error can't find the container with id 42bb6a845377cce2494507979b7d158e25e0c81853ffd2054db2445279aece85 Jan 21 14:14:48 crc kubenswrapper[4959]: I0121 14:14:48.851828 4959 generic.go:334] "Generic (PLEG): container finished" podID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerID="acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3" exitCode=0 Jan 21 14:14:48 crc kubenswrapper[4959]: I0121 14:14:48.851978 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdm4h" event={"ID":"2bbce7b7-0854-40da-a6b0-eeb2783e9548","Type":"ContainerDied","Data":"acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3"} Jan 21 14:14:48 crc kubenswrapper[4959]: I0121 14:14:48.852324 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdm4h" event={"ID":"2bbce7b7-0854-40da-a6b0-eeb2783e9548","Type":"ContainerStarted","Data":"42bb6a845377cce2494507979b7d158e25e0c81853ffd2054db2445279aece85"} Jan 21 14:14:51 crc kubenswrapper[4959]: I0121 14:14:51.877086 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdm4h" event={"ID":"2bbce7b7-0854-40da-a6b0-eeb2783e9548","Type":"ContainerStarted","Data":"a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618"} Jan 21 14:14:54 crc kubenswrapper[4959]: I0121 14:14:54.177709 4959 generic.go:334] "Generic (PLEG): container finished" podID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" 
containerID="a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618" exitCode=0 Jan 21 14:14:54 crc kubenswrapper[4959]: I0121 14:14:54.177786 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdm4h" event={"ID":"2bbce7b7-0854-40da-a6b0-eeb2783e9548","Type":"ContainerDied","Data":"a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618"} Jan 21 14:14:57 crc kubenswrapper[4959]: I0121 14:14:57.206398 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdm4h" event={"ID":"2bbce7b7-0854-40da-a6b0-eeb2783e9548","Type":"ContainerStarted","Data":"d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb"} Jan 21 14:14:57 crc kubenswrapper[4959]: I0121 14:14:57.231034 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hdm4h" podStartSLOduration=3.068467601 podStartE2EDuration="10.231017543s" podCreationTimestamp="2026-01-21 14:14:47 +0000 UTC" firstStartedPulling="2026-01-21 14:14:48.854026236 +0000 UTC m=+3949.817056779" lastFinishedPulling="2026-01-21 14:14:56.016576178 +0000 UTC m=+3956.979606721" observedRunningTime="2026-01-21 14:14:57.227488474 +0000 UTC m=+3958.190519027" watchObservedRunningTime="2026-01-21 14:14:57.231017543 +0000 UTC m=+3958.194048076" Jan 21 14:14:57 crc kubenswrapper[4959]: I0121 14:14:57.620466 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:57 crc kubenswrapper[4959]: I0121 14:14:57.620516 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:14:58 crc kubenswrapper[4959]: I0121 14:14:58.673628 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hdm4h" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerName="registry-server" probeResult="failure" output=< Jan 21 14:14:58 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 14:14:58 crc kubenswrapper[4959]: > Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.207265 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2"] Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.211439 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.222686 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.223741 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.226679 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d827140-b999-45dd-bcb1-403d26501056-secret-volume\") pod \"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.227877 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d827140-b999-45dd-bcb1-403d26501056-config-volume\") pod \"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.228010 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbfjc\" (UniqueName: \"kubernetes.io/projected/9d827140-b999-45dd-bcb1-403d26501056-kube-api-access-mbfjc\") pod \"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.231770 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2"] Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.328926 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d827140-b999-45dd-bcb1-403d26501056-config-volume\") pod \"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.328996 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbfjc\" (UniqueName: \"kubernetes.io/projected/9d827140-b999-45dd-bcb1-403d26501056-kube-api-access-mbfjc\") pod \"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.329142 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d827140-b999-45dd-bcb1-403d26501056-secret-volume\") pod \"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.330482 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d827140-b999-45dd-bcb1-403d26501056-config-volume\") pod 
\"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.354117 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d827140-b999-45dd-bcb1-403d26501056-secret-volume\") pod \"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.354341 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbfjc\" (UniqueName: \"kubernetes.io/projected/9d827140-b999-45dd-bcb1-403d26501056-kube-api-access-mbfjc\") pod \"collect-profiles-29483415-mf8c2\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:00 crc kubenswrapper[4959]: I0121 14:15:00.556558 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:01 crc kubenswrapper[4959]: I0121 14:15:01.134243 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2"] Jan 21 14:15:01 crc kubenswrapper[4959]: I0121 14:15:01.259581 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" event={"ID":"9d827140-b999-45dd-bcb1-403d26501056","Type":"ContainerStarted","Data":"19dc742b500ffba077a63c31ed222cfbb048f43d193fc9812bc3cebb05a4958a"} Jan 21 14:15:02 crc kubenswrapper[4959]: I0121 14:15:02.278127 4959 generic.go:334] "Generic (PLEG): container finished" podID="9d827140-b999-45dd-bcb1-403d26501056" containerID="b2cc35a7e4c2d8b1d9ebfdfc373f7e3f909340c055c00b6bdafcc540a19af4bb" exitCode=0 Jan 21 14:15:02 crc kubenswrapper[4959]: I0121 14:15:02.278190 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" event={"ID":"9d827140-b999-45dd-bcb1-403d26501056","Type":"ContainerDied","Data":"b2cc35a7e4c2d8b1d9ebfdfc373f7e3f909340c055c00b6bdafcc540a19af4bb"} Jan 21 14:15:03 crc kubenswrapper[4959]: I0121 14:15:03.985689 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.126568 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d827140-b999-45dd-bcb1-403d26501056-secret-volume\") pod \"9d827140-b999-45dd-bcb1-403d26501056\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.126952 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d827140-b999-45dd-bcb1-403d26501056-config-volume\") pod \"9d827140-b999-45dd-bcb1-403d26501056\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.127127 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbfjc\" (UniqueName: \"kubernetes.io/projected/9d827140-b999-45dd-bcb1-403d26501056-kube-api-access-mbfjc\") pod \"9d827140-b999-45dd-bcb1-403d26501056\" (UID: \"9d827140-b999-45dd-bcb1-403d26501056\") " Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.127934 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d827140-b999-45dd-bcb1-403d26501056-config-volume" (OuterVolumeSpecName: "config-volume") pod "9d827140-b999-45dd-bcb1-403d26501056" (UID: "9d827140-b999-45dd-bcb1-403d26501056"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.145084 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d827140-b999-45dd-bcb1-403d26501056-kube-api-access-mbfjc" (OuterVolumeSpecName: "kube-api-access-mbfjc") pod "9d827140-b999-45dd-bcb1-403d26501056" (UID: "9d827140-b999-45dd-bcb1-403d26501056"). InnerVolumeSpecName "kube-api-access-mbfjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.147850 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d827140-b999-45dd-bcb1-403d26501056-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9d827140-b999-45dd-bcb1-403d26501056" (UID: "9d827140-b999-45dd-bcb1-403d26501056"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.229452 4959 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d827140-b999-45dd-bcb1-403d26501056-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.229493 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbfjc\" (UniqueName: \"kubernetes.io/projected/9d827140-b999-45dd-bcb1-403d26501056-kube-api-access-mbfjc\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.229504 4959 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d827140-b999-45dd-bcb1-403d26501056-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.297981 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" event={"ID":"9d827140-b999-45dd-bcb1-403d26501056","Type":"ContainerDied","Data":"19dc742b500ffba077a63c31ed222cfbb048f43d193fc9812bc3cebb05a4958a"} Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.298025 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19dc742b500ffba077a63c31ed222cfbb048f43d193fc9812bc3cebb05a4958a" Jan 21 14:15:04 crc kubenswrapper[4959]: I0121 14:15:04.298085 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-mf8c2" Jan 21 14:15:05 crc kubenswrapper[4959]: I0121 14:15:05.076326 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx"] Jan 21 14:15:05 crc kubenswrapper[4959]: I0121 14:15:05.086680 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-q8pnx"] Jan 21 14:15:05 crc kubenswrapper[4959]: I0121 14:15:05.302283 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1577aba-031c-4453-88b9-dd1e63e332a1" path="/var/lib/kubelet/pods/f1577aba-031c-4453-88b9-dd1e63e332a1/volumes" Jan 21 14:15:07 crc kubenswrapper[4959]: I0121 14:15:07.670974 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:15:07 crc kubenswrapper[4959]: I0121 14:15:07.724434 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:15:07 crc kubenswrapper[4959]: I0121 14:15:07.904142 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdm4h"] Jan 21 14:15:09 crc kubenswrapper[4959]: I0121 14:15:09.344964 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hdm4h" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerName="registry-server" containerID="cri-o://d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb" gracePeriod=2 Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.185936 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.355853 4959 generic.go:334] "Generic (PLEG): container finished" podID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerID="d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb" exitCode=0 Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.355915 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdm4h" event={"ID":"2bbce7b7-0854-40da-a6b0-eeb2783e9548","Type":"ContainerDied","Data":"d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb"} Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.355941 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdm4h" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.355977 4959 scope.go:117] "RemoveContainer" containerID="d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.355966 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdm4h" event={"ID":"2bbce7b7-0854-40da-a6b0-eeb2783e9548","Type":"ContainerDied","Data":"42bb6a845377cce2494507979b7d158e25e0c81853ffd2054db2445279aece85"} Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.357141 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-catalog-content\") pod \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.357229 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-utilities\") pod \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.357384 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5pvq\" (UniqueName: \"kubernetes.io/projected/2bbce7b7-0854-40da-a6b0-eeb2783e9548-kube-api-access-t5pvq\") pod \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\" (UID: \"2bbce7b7-0854-40da-a6b0-eeb2783e9548\") " Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.359325 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-utilities" (OuterVolumeSpecName: "utilities") pod "2bbce7b7-0854-40da-a6b0-eeb2783e9548" (UID: "2bbce7b7-0854-40da-a6b0-eeb2783e9548"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.366435 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbce7b7-0854-40da-a6b0-eeb2783e9548-kube-api-access-t5pvq" (OuterVolumeSpecName: "kube-api-access-t5pvq") pod "2bbce7b7-0854-40da-a6b0-eeb2783e9548" (UID: "2bbce7b7-0854-40da-a6b0-eeb2783e9548"). InnerVolumeSpecName "kube-api-access-t5pvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.380507 4959 scope.go:117] "RemoveContainer" containerID="a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.423468 4959 scope.go:117] "RemoveContainer" containerID="acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.459812 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.459844 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5pvq\" (UniqueName: \"kubernetes.io/projected/2bbce7b7-0854-40da-a6b0-eeb2783e9548-kube-api-access-t5pvq\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.464277 4959 scope.go:117] "RemoveContainer" containerID="d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb" Jan 21 14:15:10 crc kubenswrapper[4959]: E0121 14:15:10.464780 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb\": container with ID starting with d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb not found: ID does not exist" containerID="d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.464831 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb"} err="failed to get container status \"d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb\": rpc error: code = NotFound desc = could not find container \"d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb\": container with ID starting with d9798128d84872697ec95b6ff267db35fcbe2fc6a73972d40cab2dfadddf06cb not found: ID does not exist" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.464863 4959 scope.go:117] "RemoveContainer" containerID="a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618" Jan 21 14:15:10 crc kubenswrapper[4959]: E0121 14:15:10.465239 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618\": container with ID starting with a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618 not found: ID does not exist" containerID="a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.465265 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618"} err="failed to get container status \"a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618\": rpc error: code = NotFound desc = could not find container \"a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618\": container with ID starting with a621091a95da296acc1a67900b510a9a4c5a7838c8c37c5937da81ab6f406618 not found: ID does not exist" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.465282 4959 scope.go:117] "RemoveContainer" 
containerID="acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3" Jan 21 14:15:10 crc kubenswrapper[4959]: E0121 14:15:10.465549 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3\": container with ID starting with acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3 not found: ID does not exist" containerID="acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.465583 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3"} err="failed to get container status \"acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3\": rpc error: code = NotFound desc = could not find container \"acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3\": container with ID starting with acc0bb013923204f342c634ae7325b541d47e3f726454c198d23c40efc866ac3 not found: ID does not exist" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.479210 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bbce7b7-0854-40da-a6b0-eeb2783e9548" (UID: "2bbce7b7-0854-40da-a6b0-eeb2783e9548"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.561702 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bbce7b7-0854-40da-a6b0-eeb2783e9548-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.698243 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdm4h"] Jan 21 14:15:10 crc kubenswrapper[4959]: I0121 14:15:10.707403 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hdm4h"] Jan 21 14:15:11 crc kubenswrapper[4959]: I0121 14:15:11.298286 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" path="/var/lib/kubelet/pods/2bbce7b7-0854-40da-a6b0-eeb2783e9548/volumes" Jan 21 14:15:37 crc kubenswrapper[4959]: I0121 14:15:37.123341 4959 scope.go:117] "RemoveContainer" containerID="fdf3de6306960ea44c0741d9ac16a35f697138e43a129dccd6c6f595082d3987" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.106619 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8bqpq"] Jan 21 14:16:49 crc kubenswrapper[4959]: E0121 14:16:49.107761 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerName="registry-server" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.107790 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerName="registry-server" Jan 21 14:16:49 crc kubenswrapper[4959]: E0121 14:16:49.107813 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerName="extract-utilities" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.107822 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" 
containerName="extract-utilities" Jan 21 14:16:49 crc kubenswrapper[4959]: E0121 14:16:49.107853 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerName="extract-content" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.107862 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerName="extract-content" Jan 21 14:16:49 crc kubenswrapper[4959]: E0121 14:16:49.107876 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d827140-b999-45dd-bcb1-403d26501056" containerName="collect-profiles" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.107885 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d827140-b999-45dd-bcb1-403d26501056" containerName="collect-profiles" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.108161 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d827140-b999-45dd-bcb1-403d26501056" containerName="collect-profiles" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.108189 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbce7b7-0854-40da-a6b0-eeb2783e9548" containerName="registry-server" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.109894 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.120154 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bqpq"] Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.153808 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkkq5\" (UniqueName: \"kubernetes.io/projected/475507a1-5496-4268-a6bd-8079e16b9341-kube-api-access-qkkq5\") pod \"redhat-marketplace-8bqpq\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.153897 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-catalog-content\") pod \"redhat-marketplace-8bqpq\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.154275 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-utilities\") pod \"redhat-marketplace-8bqpq\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.256690 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-utilities\") pod \"redhat-marketplace-8bqpq\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.256788 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkkq5\" (UniqueName: \"kubernetes.io/projected/475507a1-5496-4268-a6bd-8079e16b9341-kube-api-access-qkkq5\") pod \"redhat-marketplace-8bqpq\" (UID: 
\"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.256835 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-catalog-content\") pod \"redhat-marketplace-8bqpq\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.257373 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-utilities\") pod \"redhat-marketplace-8bqpq\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.257385 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-catalog-content\") pod \"redhat-marketplace-8bqpq\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.295759 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkkq5\" (UniqueName: \"kubernetes.io/projected/475507a1-5496-4268-a6bd-8079e16b9341-kube-api-access-qkkq5\") pod \"redhat-marketplace-8bqpq\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.441178 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:49 crc kubenswrapper[4959]: I0121 14:16:49.944376 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bqpq"] Jan 21 14:16:50 crc kubenswrapper[4959]: I0121 14:16:50.314615 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bqpq" event={"ID":"475507a1-5496-4268-a6bd-8079e16b9341","Type":"ContainerStarted","Data":"85d5d438cb5f8c380a55ba323b7e9e605023a157034298280ace2ec5ed39e382"} Jan 21 14:16:51 crc kubenswrapper[4959]: I0121 14:16:51.323156 4959 generic.go:334] "Generic (PLEG): container finished" podID="475507a1-5496-4268-a6bd-8079e16b9341" containerID="b209e6307e050323b13db74b89946e987d6afbf8c76daaeaf2add3e30583e97e" exitCode=0 Jan 21 14:16:51 crc kubenswrapper[4959]: I0121 14:16:51.323240 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bqpq" event={"ID":"475507a1-5496-4268-a6bd-8079e16b9341","Type":"ContainerDied","Data":"b209e6307e050323b13db74b89946e987d6afbf8c76daaeaf2add3e30583e97e"} Jan 21 14:16:51 crc kubenswrapper[4959]: I0121 14:16:51.379533 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:16:51 crc kubenswrapper[4959]: I0121 14:16:51.379595 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.319486 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cx5gm"] Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.333477 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.352830 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.361234 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cx5gm"] Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.535051 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-catalog-content\") pod \"certified-operators-cx5gm\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.535126 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-utilities\") pod \"certified-operators-cx5gm\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.535489 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grw6f\" (UniqueName: \"kubernetes.io/projected/e871d237-d433-4813-b5fd-32aeebc79401-kube-api-access-grw6f\") pod \"certified-operators-cx5gm\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.638939 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-catalog-content\") pod \"certified-operators-cx5gm\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.639000 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-utilities\") pod \"certified-operators-cx5gm\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.639116 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grw6f\" (UniqueName: \"kubernetes.io/projected/e871d237-d433-4813-b5fd-32aeebc79401-kube-api-access-grw6f\") pod \"certified-operators-cx5gm\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.639517 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-catalog-content\") pod \"certified-operators-cx5gm\" (UID: 
\"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.640379 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-utilities\") pod \"certified-operators-cx5gm\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.667016 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grw6f\" (UniqueName: \"kubernetes.io/projected/e871d237-d433-4813-b5fd-32aeebc79401-kube-api-access-grw6f\") pod \"certified-operators-cx5gm\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:52 crc kubenswrapper[4959]: I0121 14:16:52.966140 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:16:53 crc kubenswrapper[4959]: I0121 14:16:53.603902 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cx5gm"] Jan 21 14:16:53 crc kubenswrapper[4959]: W0121 14:16:53.609934 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode871d237_d433_4813_b5fd_32aeebc79401.slice/crio-ca5ab81a844ac91a06d27861dc7b0b502dbbc5778682cd9df768f4472a8921fc WatchSource:0}: Error finding container ca5ab81a844ac91a06d27861dc7b0b502dbbc5778682cd9df768f4472a8921fc: Status 404 returned error can't find the container with id ca5ab81a844ac91a06d27861dc7b0b502dbbc5778682cd9df768f4472a8921fc Jan 21 14:16:54 crc kubenswrapper[4959]: I0121 14:16:54.384532 4959 generic.go:334] "Generic (PLEG): container finished" podID="475507a1-5496-4268-a6bd-8079e16b9341" containerID="39b9b85ba440e739c17176a5d8b7d157c7e9f37d2b44d129c15f03ac6ca2fb02" exitCode=0 Jan 21 14:16:54 crc kubenswrapper[4959]: I0121 14:16:54.384629 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bqpq" event={"ID":"475507a1-5496-4268-a6bd-8079e16b9341","Type":"ContainerDied","Data":"39b9b85ba440e739c17176a5d8b7d157c7e9f37d2b44d129c15f03ac6ca2fb02"} Jan 21 14:16:54 crc kubenswrapper[4959]: I0121 14:16:54.387716 4959 generic.go:334] "Generic (PLEG): container finished" podID="e871d237-d433-4813-b5fd-32aeebc79401" containerID="aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105" exitCode=0 Jan 21 14:16:54 crc kubenswrapper[4959]: I0121 14:16:54.387754 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx5gm" event={"ID":"e871d237-d433-4813-b5fd-32aeebc79401","Type":"ContainerDied","Data":"aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105"} Jan 21 14:16:54 crc kubenswrapper[4959]: I0121 14:16:54.387779 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx5gm" event={"ID":"e871d237-d433-4813-b5fd-32aeebc79401","Type":"ContainerStarted","Data":"ca5ab81a844ac91a06d27861dc7b0b502dbbc5778682cd9df768f4472a8921fc"} Jan 21 14:16:56 crc kubenswrapper[4959]: I0121 14:16:56.407009 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx5gm" 
event={"ID":"e871d237-d433-4813-b5fd-32aeebc79401","Type":"ContainerStarted","Data":"5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454"} Jan 21 14:16:56 crc kubenswrapper[4959]: I0121 14:16:56.410442 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bqpq" event={"ID":"475507a1-5496-4268-a6bd-8079e16b9341","Type":"ContainerStarted","Data":"94a38f49b67dc74769addee33a6053f3cef52a0dfc34ef184ce9059c6db39558"} Jan 21 14:16:56 crc kubenswrapper[4959]: I0121 14:16:56.451935 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8bqpq" podStartSLOduration=4.321990299 podStartE2EDuration="7.451916475s" podCreationTimestamp="2026-01-21 14:16:49 +0000 UTC" firstStartedPulling="2026-01-21 14:16:52.352499559 +0000 UTC m=+4073.315530102" lastFinishedPulling="2026-01-21 14:16:55.482425735 +0000 UTC m=+4076.445456278" observedRunningTime="2026-01-21 14:16:56.447572646 +0000 UTC m=+4077.410603189" watchObservedRunningTime="2026-01-21 14:16:56.451916475 +0000 UTC m=+4077.414947018" Jan 21 14:16:58 crc kubenswrapper[4959]: I0121 14:16:58.427733 4959 generic.go:334] "Generic (PLEG): container finished" podID="e871d237-d433-4813-b5fd-32aeebc79401" containerID="5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454" exitCode=0 Jan 21 14:16:58 crc kubenswrapper[4959]: I0121 14:16:58.427897 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx5gm" event={"ID":"e871d237-d433-4813-b5fd-32aeebc79401","Type":"ContainerDied","Data":"5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454"} Jan 21 14:16:59 crc kubenswrapper[4959]: I0121 14:16:59.442072 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:59 crc kubenswrapper[4959]: I0121 14:16:59.442474 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:16:59 crc kubenswrapper[4959]: I0121 14:16:59.501407 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:17:00 crc kubenswrapper[4959]: I0121 14:17:00.499276 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:17:01 crc kubenswrapper[4959]: I0121 14:17:01.458935 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx5gm" event={"ID":"e871d237-d433-4813-b5fd-32aeebc79401","Type":"ContainerStarted","Data":"a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3"} Jan 21 14:17:01 crc kubenswrapper[4959]: I0121 14:17:01.483152 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cx5gm" podStartSLOduration=3.427292469 podStartE2EDuration="9.483133263s" podCreationTimestamp="2026-01-21 14:16:52 +0000 UTC" firstStartedPulling="2026-01-21 14:16:54.389519725 +0000 UTC m=+4075.352550268" lastFinishedPulling="2026-01-21 14:17:00.445360519 +0000 UTC m=+4081.408391062" observedRunningTime="2026-01-21 14:17:01.477189685 +0000 UTC m=+4082.440220228" watchObservedRunningTime="2026-01-21 14:17:01.483133263 +0000 UTC m=+4082.446163796" Jan 21 14:17:02 crc kubenswrapper[4959]: I0121 14:17:02.968055 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:17:02 crc kubenswrapper[4959]: I0121 14:17:02.968430 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.018590 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.089680 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bqpq"] Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.089921 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8bqpq" podUID="475507a1-5496-4268-a6bd-8079e16b9341" containerName="registry-server" containerID="cri-o://94a38f49b67dc74769addee33a6053f3cef52a0dfc34ef184ce9059c6db39558" gracePeriod=2 Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.508996 4959 generic.go:334] "Generic (PLEG): container finished" podID="475507a1-5496-4268-a6bd-8079e16b9341" containerID="94a38f49b67dc74769addee33a6053f3cef52a0dfc34ef184ce9059c6db39558" exitCode=0 Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.509069 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bqpq" event={"ID":"475507a1-5496-4268-a6bd-8079e16b9341","Type":"ContainerDied","Data":"94a38f49b67dc74769addee33a6053f3cef52a0dfc34ef184ce9059c6db39558"} Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.681839 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.707801 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkkq5\" (UniqueName: \"kubernetes.io/projected/475507a1-5496-4268-a6bd-8079e16b9341-kube-api-access-qkkq5\") pod \"475507a1-5496-4268-a6bd-8079e16b9341\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.707994 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-catalog-content\") pod \"475507a1-5496-4268-a6bd-8079e16b9341\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.708090 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-utilities\") pod \"475507a1-5496-4268-a6bd-8079e16b9341\" (UID: \"475507a1-5496-4268-a6bd-8079e16b9341\") " Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.709431 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-utilities" (OuterVolumeSpecName: "utilities") pod "475507a1-5496-4268-a6bd-8079e16b9341" (UID: "475507a1-5496-4268-a6bd-8079e16b9341"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.729274 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475507a1-5496-4268-a6bd-8079e16b9341-kube-api-access-qkkq5" (OuterVolumeSpecName: "kube-api-access-qkkq5") pod "475507a1-5496-4268-a6bd-8079e16b9341" (UID: "475507a1-5496-4268-a6bd-8079e16b9341"). InnerVolumeSpecName "kube-api-access-qkkq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.766695 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "475507a1-5496-4268-a6bd-8079e16b9341" (UID: "475507a1-5496-4268-a6bd-8079e16b9341"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.811697 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.811923 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkkq5\" (UniqueName: \"kubernetes.io/projected/475507a1-5496-4268-a6bd-8079e16b9341-kube-api-access-qkkq5\") on node \"crc\" DevicePath \"\"" Jan 21 14:17:03 crc kubenswrapper[4959]: I0121 14:17:03.812029 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475507a1-5496-4268-a6bd-8079e16b9341-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:17:04 crc kubenswrapper[4959]: I0121 14:17:04.520672 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bqpq" event={"ID":"475507a1-5496-4268-a6bd-8079e16b9341","Type":"ContainerDied","Data":"85d5d438cb5f8c380a55ba323b7e9e605023a157034298280ace2ec5ed39e382"} Jan 21 14:17:04 crc kubenswrapper[4959]: I0121 14:17:04.520771 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bqpq" Jan 21 14:17:04 crc kubenswrapper[4959]: I0121 14:17:04.521025 4959 scope.go:117] "RemoveContainer" containerID="94a38f49b67dc74769addee33a6053f3cef52a0dfc34ef184ce9059c6db39558" Jan 21 14:17:04 crc kubenswrapper[4959]: I0121 14:17:04.557271 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bqpq"] Jan 21 14:17:04 crc kubenswrapper[4959]: I0121 14:17:04.561400 4959 scope.go:117] "RemoveContainer" containerID="39b9b85ba440e739c17176a5d8b7d157c7e9f37d2b44d129c15f03ac6ca2fb02" Jan 21 14:17:04 crc kubenswrapper[4959]: I0121 14:17:04.568475 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bqpq"] Jan 21 14:17:04 crc kubenswrapper[4959]: I0121 14:17:04.585450 4959 scope.go:117] "RemoveContainer" containerID="b209e6307e050323b13db74b89946e987d6afbf8c76daaeaf2add3e30583e97e" Jan 21 14:17:05 crc kubenswrapper[4959]: I0121 14:17:05.298476 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475507a1-5496-4268-a6bd-8079e16b9341" path="/var/lib/kubelet/pods/475507a1-5496-4268-a6bd-8079e16b9341/volumes" Jan 21 14:17:13 crc kubenswrapper[4959]: I0121 14:17:13.018835 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:17:13 crc kubenswrapper[4959]: I0121 14:17:13.071661 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cx5gm"] Jan 21 14:17:13 crc kubenswrapper[4959]: I0121 14:17:13.596175 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cx5gm" podUID="e871d237-d433-4813-b5fd-32aeebc79401" containerName="registry-server" containerID="cri-o://a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3" gracePeriod=2 Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.121582 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.132064 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-utilities\") pod \"e871d237-d433-4813-b5fd-32aeebc79401\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.132973 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-utilities" (OuterVolumeSpecName: "utilities") pod "e871d237-d433-4813-b5fd-32aeebc79401" (UID: "e871d237-d433-4813-b5fd-32aeebc79401"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.233622 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-catalog-content\") pod \"e871d237-d433-4813-b5fd-32aeebc79401\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.233972 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grw6f\" (UniqueName: \"kubernetes.io/projected/e871d237-d433-4813-b5fd-32aeebc79401-kube-api-access-grw6f\") pod \"e871d237-d433-4813-b5fd-32aeebc79401\" (UID: \"e871d237-d433-4813-b5fd-32aeebc79401\") " Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.234531 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.241621 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e871d237-d433-4813-b5fd-32aeebc79401-kube-api-access-grw6f" (OuterVolumeSpecName: "kube-api-access-grw6f") pod "e871d237-d433-4813-b5fd-32aeebc79401" (UID: "e871d237-d433-4813-b5fd-32aeebc79401"). InnerVolumeSpecName "kube-api-access-grw6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.291586 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e871d237-d433-4813-b5fd-32aeebc79401" (UID: "e871d237-d433-4813-b5fd-32aeebc79401"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.337793 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e871d237-d433-4813-b5fd-32aeebc79401-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.337838 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grw6f\" (UniqueName: \"kubernetes.io/projected/e871d237-d433-4813-b5fd-32aeebc79401-kube-api-access-grw6f\") on node \"crc\" DevicePath \"\"" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.606508 4959 generic.go:334] "Generic (PLEG): container finished" podID="e871d237-d433-4813-b5fd-32aeebc79401" containerID="a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3" exitCode=0 Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.606555 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx5gm" event={"ID":"e871d237-d433-4813-b5fd-32aeebc79401","Type":"ContainerDied","Data":"a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3"} Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.606566 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cx5gm" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.606585 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx5gm" event={"ID":"e871d237-d433-4813-b5fd-32aeebc79401","Type":"ContainerDied","Data":"ca5ab81a844ac91a06d27861dc7b0b502dbbc5778682cd9df768f4472a8921fc"} Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.606607 4959 scope.go:117] "RemoveContainer" containerID="a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.634802 4959 scope.go:117] "RemoveContainer" containerID="5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.653333 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cx5gm"] Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.669832 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cx5gm"] Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.676379 4959 scope.go:117] "RemoveContainer" containerID="aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.720043 4959 scope.go:117] "RemoveContainer" containerID="a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3" Jan 21 14:17:14 crc kubenswrapper[4959]: E0121 14:17:14.721230 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3\": container with ID starting with a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3 not found: ID does not exist" containerID="a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.721275 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3"} err="failed to get container status \"a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3\": rpc error: code = NotFound desc = could not find container \"a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3\": container with ID starting with a0f749b475b3e9254a3eda5be1c7ea02d02ffa55332fe4b3f6c40eb6cca592b3 not found: ID does not exist" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.721302 4959 scope.go:117] "RemoveContainer" containerID="5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454" Jan 21 14:17:14 crc kubenswrapper[4959]: E0121 14:17:14.721893 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454\": container with ID starting with 5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454 not found: ID does not exist" containerID="5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.721922 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454"} err="failed to get container status \"5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454\": rpc error: code = NotFound desc = could not find 
container \"5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454\": container with ID starting with 5fe01dee4e43bbb7cef21583e1a5753137bf1dc20b6428673ffe6899fdabb454 not found: ID does not exist" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.721939 4959 scope.go:117] "RemoveContainer" containerID="aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105" Jan 21 14:17:14 crc kubenswrapper[4959]: E0121 14:17:14.722209 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105\": container with ID starting with aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105 not found: ID does not exist" containerID="aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105" Jan 21 14:17:14 crc kubenswrapper[4959]: I0121 14:17:14.722232 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105"} err="failed to get container status \"aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105\": rpc error: code = NotFound desc = could not find container \"aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105\": container with ID starting with aaabba4441f51dc6e02cd1223d96dc95331821abe040e0134c43be8fc5fd7105 not found: ID does not exist" Jan 21 14:17:15 crc kubenswrapper[4959]: I0121 14:17:15.297270 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e871d237-d433-4813-b5fd-32aeebc79401" path="/var/lib/kubelet/pods/e871d237-d433-4813-b5fd-32aeebc79401/volumes" Jan 21 14:17:21 crc kubenswrapper[4959]: I0121 14:17:21.379259 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:17:21 crc kubenswrapper[4959]: I0121 14:17:21.379872 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:17:51 crc kubenswrapper[4959]: I0121 14:17:51.379265 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:17:51 crc kubenswrapper[4959]: I0121 14:17:51.379919 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:17:51 crc kubenswrapper[4959]: I0121 14:17:51.380001 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 14:17:51 crc kubenswrapper[4959]: I0121 14:17:51.381018 4959 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd5517f21dd5b8720cc86f6b292858eca8ccea6e245a77483d73e8a935061861"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:17:51 crc kubenswrapper[4959]: I0121 14:17:51.381290 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://dd5517f21dd5b8720cc86f6b292858eca8ccea6e245a77483d73e8a935061861" gracePeriod=600 Jan 21 14:17:51 crc kubenswrapper[4959]: I0121 14:17:51.958403 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="dd5517f21dd5b8720cc86f6b292858eca8ccea6e245a77483d73e8a935061861" exitCode=0 Jan 21 14:17:51 crc kubenswrapper[4959]: I0121 14:17:51.958507 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"dd5517f21dd5b8720cc86f6b292858eca8ccea6e245a77483d73e8a935061861"} Jan 21 14:17:51 crc kubenswrapper[4959]: I0121 14:17:51.958785 4959 scope.go:117] "RemoveContainer" containerID="bd649c4d2e377dd05f80d523bb3f8f471308a4a24538884b7d79a62819f447f4" Jan 21 14:17:52 crc kubenswrapper[4959]: I0121 14:17:52.972237 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d"} Jan 21 14:20:21 crc kubenswrapper[4959]: I0121 14:20:21.379837 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:20:21 crc kubenswrapper[4959]: I0121 14:20:21.381648 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:20:51 crc kubenswrapper[4959]: I0121 14:20:51.379625 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:20:51 crc kubenswrapper[4959]: I0121 14:20:51.380278 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.380164 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.380778 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.380842 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.381766 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.381856 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" gracePeriod=600 Jan 21 14:21:21 crc kubenswrapper[4959]: E0121 14:21:21.512979 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.826181 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" exitCode=0 Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.826234 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d"} Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.826280 4959 scope.go:117] "RemoveContainer" containerID="dd5517f21dd5b8720cc86f6b292858eca8ccea6e245a77483d73e8a935061861" Jan 21 14:21:21 crc kubenswrapper[4959]: I0121 14:21:21.827154 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:21:21 crc kubenswrapper[4959]: E0121 14:21:21.827466 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:21:33 crc kubenswrapper[4959]: I0121 14:21:33.286934 4959 scope.go:117] 
"RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:21:33 crc kubenswrapper[4959]: E0121 14:21:33.287886 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:21:45 crc kubenswrapper[4959]: I0121 14:21:45.286597 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:21:45 crc kubenswrapper[4959]: E0121 14:21:45.288665 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:21:57 crc kubenswrapper[4959]: I0121 14:21:57.286564 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:21:57 crc kubenswrapper[4959]: E0121 14:21:57.287405 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:22:10 crc kubenswrapper[4959]: I0121 14:22:10.286752 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:22:10 crc kubenswrapper[4959]: E0121 14:22:10.288814 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:22:22 crc kubenswrapper[4959]: I0121 14:22:22.287214 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:22:22 crc kubenswrapper[4959]: E0121 14:22:22.288062 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.269067 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wdbmr"] Jan 21 14:22:28 crc kubenswrapper[4959]: E0121 14:22:28.270005 4959 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="475507a1-5496-4268-a6bd-8079e16b9341" containerName="extract-content" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.270018 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="475507a1-5496-4268-a6bd-8079e16b9341" containerName="extract-content" Jan 21 14:22:28 crc kubenswrapper[4959]: E0121 14:22:28.270041 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e871d237-d433-4813-b5fd-32aeebc79401" containerName="extract-utilities" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.270047 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e871d237-d433-4813-b5fd-32aeebc79401" containerName="extract-utilities" Jan 21 14:22:28 crc kubenswrapper[4959]: E0121 14:22:28.270061 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475507a1-5496-4268-a6bd-8079e16b9341" containerName="extract-utilities" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.270067 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="475507a1-5496-4268-a6bd-8079e16b9341" containerName="extract-utilities" Jan 21 14:22:28 crc kubenswrapper[4959]: E0121 14:22:28.270086 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e871d237-d433-4813-b5fd-32aeebc79401" containerName="registry-server" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.270092 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e871d237-d433-4813-b5fd-32aeebc79401" containerName="registry-server" Jan 21 14:22:28 crc kubenswrapper[4959]: E0121 14:22:28.270133 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e871d237-d433-4813-b5fd-32aeebc79401" containerName="extract-content" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.270140 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="e871d237-d433-4813-b5fd-32aeebc79401" containerName="extract-content" Jan 21 14:22:28 crc kubenswrapper[4959]: E0121 14:22:28.270153 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475507a1-5496-4268-a6bd-8079e16b9341" containerName="registry-server" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.270159 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="475507a1-5496-4268-a6bd-8079e16b9341" containerName="registry-server" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.270349 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="475507a1-5496-4268-a6bd-8079e16b9341" containerName="registry-server" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.270375 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="e871d237-d433-4813-b5fd-32aeebc79401" containerName="registry-server" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.271911 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.287673 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wdbmr"] Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.429979 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-utilities\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.430069 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnpfh\" (UniqueName: \"kubernetes.io/projected/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-kube-api-access-bnpfh\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.430158 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-catalog-content\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.532449 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnpfh\" (UniqueName: \"kubernetes.io/projected/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-kube-api-access-bnpfh\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.532558 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-catalog-content\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.532708 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-utilities\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.533484 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-catalog-content\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.533555 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-utilities\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.562149 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bnpfh\" (UniqueName: \"kubernetes.io/projected/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-kube-api-access-bnpfh\") pod \"community-operators-wdbmr\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:28 crc kubenswrapper[4959]: I0121 14:22:28.595288 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:29 crc kubenswrapper[4959]: I0121 14:22:29.246379 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wdbmr"] Jan 21 14:22:29 crc kubenswrapper[4959]: I0121 14:22:29.737134 4959 generic.go:334] "Generic (PLEG): container finished" podID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerID="1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20" exitCode=0 Jan 21 14:22:29 crc kubenswrapper[4959]: I0121 14:22:29.737260 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdbmr" event={"ID":"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078","Type":"ContainerDied","Data":"1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20"} Jan 21 14:22:29 crc kubenswrapper[4959]: I0121 14:22:29.737460 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdbmr" event={"ID":"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078","Type":"ContainerStarted","Data":"c4d5bd89dc1f3c72598a018c7e9468758735cb936d95103514c8d1f5806df42c"} Jan 21 14:22:29 crc kubenswrapper[4959]: I0121 14:22:29.740948 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:22:30 crc kubenswrapper[4959]: I0121 14:22:30.746690 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdbmr" event={"ID":"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078","Type":"ContainerStarted","Data":"3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661"} Jan 21 14:22:31 crc kubenswrapper[4959]: I0121 14:22:31.760914 4959 generic.go:334] "Generic (PLEG): container finished" podID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerID="3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661" exitCode=0 Jan 21 14:22:31 crc kubenswrapper[4959]: I0121 14:22:31.761011 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdbmr" event={"ID":"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078","Type":"ContainerDied","Data":"3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661"} Jan 21 14:22:32 crc kubenswrapper[4959]: I0121 14:22:32.777509 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdbmr" event={"ID":"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078","Type":"ContainerStarted","Data":"e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef"} Jan 21 14:22:32 crc kubenswrapper[4959]: I0121 14:22:32.811367 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wdbmr" podStartSLOduration=2.097053233 podStartE2EDuration="4.811344946s" podCreationTimestamp="2026-01-21 14:22:28 +0000 UTC" firstStartedPulling="2026-01-21 14:22:29.740531823 +0000 UTC m=+4410.703562376" lastFinishedPulling="2026-01-21 14:22:32.454823556 +0000 UTC m=+4413.417854089" observedRunningTime="2026-01-21 14:22:32.797226698 +0000 UTC m=+4413.760257271" watchObservedRunningTime="2026-01-21 
Jan 21 14:22:34 crc kubenswrapper[4959]: I0121 14:22:34.286593 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d"
Jan 21 14:22:34 crc kubenswrapper[4959]: E0121 14:22:34.287143 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:22:38 crc kubenswrapper[4959]: I0121 14:22:38.595435 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wdbmr"
Jan 21 14:22:38 crc kubenswrapper[4959]: I0121 14:22:38.595978 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wdbmr"
Jan 21 14:22:38 crc kubenswrapper[4959]: I0121 14:22:38.644302 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wdbmr"
Jan 21 14:22:38 crc kubenswrapper[4959]: I0121 14:22:38.891565 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wdbmr"
Jan 21 14:22:38 crc kubenswrapper[4959]: I0121 14:22:38.943787 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wdbmr"]
Jan 21 14:22:40 crc kubenswrapper[4959]: I0121 14:22:40.858299 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wdbmr" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerName="registry-server" containerID="cri-o://e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef" gracePeriod=2
Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.748524 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.791713 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnpfh\" (UniqueName: \"kubernetes.io/projected/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-kube-api-access-bnpfh\") pod \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.791873 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-catalog-content\") pod \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.791952 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-utilities\") pod \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\" (UID: \"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078\") " Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.793632 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-utilities" (OuterVolumeSpecName: "utilities") pod "cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" (UID: "cb33a9f0-c0f1-4225-bf3f-d830a0bc1078"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.797787 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-kube-api-access-bnpfh" (OuterVolumeSpecName: "kube-api-access-bnpfh") pod "cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" (UID: "cb33a9f0-c0f1-4225-bf3f-d830a0bc1078"). InnerVolumeSpecName "kube-api-access-bnpfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.855608 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" (UID: "cb33a9f0-c0f1-4225-bf3f-d830a0bc1078"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.893304 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.893334 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.893344 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnpfh\" (UniqueName: \"kubernetes.io/projected/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078-kube-api-access-bnpfh\") on node \"crc\" DevicePath \"\"" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.896125 4959 generic.go:334] "Generic (PLEG): container finished" podID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerID="e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef" exitCode=0 Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.896167 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdbmr" event={"ID":"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078","Type":"ContainerDied","Data":"e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef"} Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.896201 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdbmr" event={"ID":"cb33a9f0-c0f1-4225-bf3f-d830a0bc1078","Type":"ContainerDied","Data":"c4d5bd89dc1f3c72598a018c7e9468758735cb936d95103514c8d1f5806df42c"} Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.896221 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wdbmr" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.896225 4959 scope.go:117] "RemoveContainer" containerID="e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.930108 4959 scope.go:117] "RemoveContainer" containerID="3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.934915 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wdbmr"] Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.946543 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wdbmr"] Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.953758 4959 scope.go:117] "RemoveContainer" containerID="1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.996425 4959 scope.go:117] "RemoveContainer" containerID="e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef" Jan 21 14:22:43 crc kubenswrapper[4959]: E0121 14:22:43.999634 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef\": container with ID starting with e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef not found: ID does not exist" containerID="e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.999696 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef"} err="failed to get container status \"e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef\": rpc error: code = NotFound desc = could not find container \"e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef\": container with ID starting with e34eba95b2b2e0528d41037f5f955ad4364fdf10ee2ade76e942ea64674727ef not found: ID does not exist" Jan 21 14:22:43 crc kubenswrapper[4959]: I0121 14:22:43.999727 4959 scope.go:117] "RemoveContainer" containerID="3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661" Jan 21 14:22:44 crc kubenswrapper[4959]: E0121 14:22:44.000590 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661\": container with ID starting with 3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661 not found: ID does not exist" containerID="3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661" Jan 21 14:22:44 crc kubenswrapper[4959]: I0121 14:22:44.000630 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661"} err="failed to get container status \"3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661\": rpc error: code = NotFound desc = could not find container \"3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661\": container with ID starting with 3939db53200442996312de04062c6dc9eb2f661021dc2ea647b6ab99feed7661 not found: ID does not exist" Jan 21 14:22:44 crc kubenswrapper[4959]: I0121 14:22:44.000650 4959 scope.go:117] "RemoveContainer" 
containerID="1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20" Jan 21 14:22:44 crc kubenswrapper[4959]: E0121 14:22:44.001054 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20\": container with ID starting with 1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20 not found: ID does not exist" containerID="1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20" Jan 21 14:22:44 crc kubenswrapper[4959]: I0121 14:22:44.001109 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20"} err="failed to get container status \"1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20\": rpc error: code = NotFound desc = could not find container \"1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20\": container with ID starting with 1e9c9e2bb12ecf84b83e4ec7e24b0963439da821518d9a63872af49a1063cc20 not found: ID does not exist" Jan 21 14:22:45 crc kubenswrapper[4959]: I0121 14:22:45.300918 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" path="/var/lib/kubelet/pods/cb33a9f0-c0f1-4225-bf3f-d830a0bc1078/volumes" Jan 21 14:22:47 crc kubenswrapper[4959]: I0121 14:22:47.315736 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:22:47 crc kubenswrapper[4959]: E0121 14:22:47.316453 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:23:01 crc kubenswrapper[4959]: I0121 14:23:01.287174 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:23:01 crc kubenswrapper[4959]: E0121 14:23:01.287963 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:23:16 crc kubenswrapper[4959]: I0121 14:23:16.291466 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:23:16 crc kubenswrapper[4959]: E0121 14:23:16.293984 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:23:28 crc kubenswrapper[4959]: I0121 14:23:28.286500 4959 scope.go:117] "RemoveContainer" 
containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:23:28 crc kubenswrapper[4959]: E0121 14:23:28.287240 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:23:34 crc kubenswrapper[4959]: I0121 14:23:34.328328 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="946dc99c-def1-464a-87fe-7a5a8b46b325" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.235:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:23:34 crc kubenswrapper[4959]: I0121 14:23:34.405267 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="c638106d-abd9-4707-8da4-b5c5d1c30f57" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.236:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:23:43 crc kubenswrapper[4959]: I0121 14:23:43.286652 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:23:43 crc kubenswrapper[4959]: E0121 14:23:43.287524 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:23:56 crc kubenswrapper[4959]: I0121 14:23:56.286276 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:23:56 crc kubenswrapper[4959]: E0121 14:23:56.287027 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:24:08 crc kubenswrapper[4959]: I0121 14:24:08.286417 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:24:08 crc kubenswrapper[4959]: E0121 14:24:08.287160 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:24:23 crc kubenswrapper[4959]: I0121 14:24:23.286346 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:24:23 crc kubenswrapper[4959]: 
E0121 14:24:23.288366 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:24:35 crc kubenswrapper[4959]: I0121 14:24:35.286986 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:24:35 crc kubenswrapper[4959]: E0121 14:24:35.287781 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:24:49 crc kubenswrapper[4959]: I0121 14:24:49.292528 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:24:49 crc kubenswrapper[4959]: E0121 14:24:49.293365 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:25:01 crc kubenswrapper[4959]: I0121 14:25:01.286124 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:25:01 crc kubenswrapper[4959]: E0121 14:25:01.287044 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:25:14 crc kubenswrapper[4959]: I0121 14:25:14.286654 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:25:14 crc kubenswrapper[4959]: E0121 14:25:14.287579 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:25:26 crc kubenswrapper[4959]: I0121 14:25:26.286795 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:25:26 crc kubenswrapper[4959]: E0121 14:25:26.287919 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:25:35 crc kubenswrapper[4959]: I0121 14:25:35.454334 4959 generic.go:334] "Generic (PLEG): container finished" podID="b61e1395-82fb-4c39-907d-d5aa160aa10f" containerID="7fd6fbf41473e38419f817362be4716c6f6df48d89ad4541a6fc5c60508adb65" exitCode=1 Jan 21 14:25:35 crc kubenswrapper[4959]: I0121 14:25:35.454444 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b61e1395-82fb-4c39-907d-d5aa160aa10f","Type":"ContainerDied","Data":"7fd6fbf41473e38419f817362be4716c6f6df48d89ad4541a6fc5c60508adb65"} Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.911268 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.966258 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-workdir\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.966428 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ssh-key\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.966455 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.966532 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-temporary\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.966640 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config-secret\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.966686 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ca-certs\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.966758 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc 
kubenswrapper[4959]: I0121 14:25:36.966789 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl76l\" (UniqueName: \"kubernetes.io/projected/b61e1395-82fb-4c39-907d-d5aa160aa10f-kube-api-access-rl76l\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.966827 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-config-data\") pod \"b61e1395-82fb-4c39-907d-d5aa160aa10f\" (UID: \"b61e1395-82fb-4c39-907d-d5aa160aa10f\") " Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.967399 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.968085 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-config-data" (OuterVolumeSpecName: "config-data") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.975366 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.976485 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.978354 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61e1395-82fb-4c39-907d-d5aa160aa10f-kube-api-access-rl76l" (OuterVolumeSpecName: "kube-api-access-rl76l") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "kube-api-access-rl76l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:25:36 crc kubenswrapper[4959]: I0121 14:25:36.996291 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.003379 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.023158 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.024747 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b61e1395-82fb-4c39-907d-d5aa160aa10f" (UID: "b61e1395-82fb-4c39-907d-d5aa160aa10f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070598 4959 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070661 4959 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070684 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl76l\" (UniqueName: \"kubernetes.io/projected/b61e1395-82fb-4c39-907d-d5aa160aa10f-kube-api-access-rl76l\") on node \"crc\" DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070700 4959 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b61e1395-82fb-4c39-907d-d5aa160aa10f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070722 4959 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070738 4959 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070852 4959 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070873 4959 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b61e1395-82fb-4c39-907d-d5aa160aa10f-test-operator-ephemeral-temporary\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.070886 4959 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b61e1395-82fb-4c39-907d-d5aa160aa10f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.101070 4959 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.172800 4959 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.476892 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b61e1395-82fb-4c39-907d-d5aa160aa10f","Type":"ContainerDied","Data":"5770ee1c1e7d7a36c349afa3526af060b3d953098000cd0773b1adddbb36c752"} Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.477413 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5770ee1c1e7d7a36c349afa3526af060b3d953098000cd0773b1adddbb36c752" Jan 21 14:25:37 crc kubenswrapper[4959]: I0121 14:25:37.476961 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.528441 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 14:25:39 crc kubenswrapper[4959]: E0121 14:25:39.529238 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerName="registry-server" Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.529254 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerName="registry-server" Jan 21 14:25:39 crc kubenswrapper[4959]: E0121 14:25:39.529275 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61e1395-82fb-4c39-907d-d5aa160aa10f" containerName="tempest-tests-tempest-tests-runner" Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.529283 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61e1395-82fb-4c39-907d-d5aa160aa10f" containerName="tempest-tests-tempest-tests-runner" Jan 21 14:25:39 crc kubenswrapper[4959]: E0121 14:25:39.529311 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerName="extract-content" Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.529320 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerName="extract-content" Jan 21 14:25:39 crc kubenswrapper[4959]: E0121 14:25:39.529329 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerName="extract-utilities" Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.529337 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerName="extract-utilities" Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.529603 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="b61e1395-82fb-4c39-907d-d5aa160aa10f" containerName="tempest-tests-tempest-tests-runner" Jan 21 14:25:39 crc 
kubenswrapper[4959]: I0121 14:25:39.529634 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb33a9f0-c0f1-4225-bf3f-d830a0bc1078" containerName="registry-server"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.530485 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.544376 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.590290 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-w4l25"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.677035 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhfd\" (UniqueName: \"kubernetes.io/projected/329675f0-0d01-475e-b260-b537a74ad7a1-kube-api-access-7xhfd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"329675f0-0d01-475e-b260-b537a74ad7a1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.677203 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"329675f0-0d01-475e-b260-b537a74ad7a1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.779694 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhfd\" (UniqueName: \"kubernetes.io/projected/329675f0-0d01-475e-b260-b537a74ad7a1-kube-api-access-7xhfd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"329675f0-0d01-475e-b260-b537a74ad7a1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.779820 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"329675f0-0d01-475e-b260-b537a74ad7a1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.780285 4959 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"329675f0-0d01-475e-b260-b537a74ad7a1\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.800347 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhfd\" (UniqueName: \"kubernetes.io/projected/329675f0-0d01-475e-b260-b537a74ad7a1-kube-api-access-7xhfd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"329675f0-0d01-475e-b260-b537a74ad7a1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.808654 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"329675f0-0d01-475e-b260-b537a74ad7a1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:39 crc kubenswrapper[4959]: I0121 14:25:39.912134 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 21 14:25:40 crc kubenswrapper[4959]: I0121 14:25:40.286188 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d"
Jan 21 14:25:40 crc kubenswrapper[4959]: E0121 14:25:40.286854 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:25:40 crc kubenswrapper[4959]: I0121 14:25:40.359437 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 21 14:25:40 crc kubenswrapper[4959]: I0121 14:25:40.502262 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"329675f0-0d01-475e-b260-b537a74ad7a1","Type":"ContainerStarted","Data":"8c52392b2380019b0b2924c9284e170a71f3217f6094b172bc2f4c1aa85bc447"}
Jan 21 14:25:41 crc kubenswrapper[4959]: I0121 14:25:41.512896 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"329675f0-0d01-475e-b260-b537a74ad7a1","Type":"ContainerStarted","Data":"5cf8876ad1f231893a03fb2df9b443a62083f50e4bc473bc7356bc9d09f9f4a7"}
Jan 21 14:25:41 crc kubenswrapper[4959]: I0121 14:25:41.534350 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.654436558 podStartE2EDuration="2.534328814s" podCreationTimestamp="2026-01-21 14:25:39 +0000 UTC" firstStartedPulling="2026-01-21 14:25:40.365322204 +0000 UTC m=+4601.328352747" lastFinishedPulling="2026-01-21 14:25:41.24521446 +0000 UTC m=+4602.208245003" observedRunningTime="2026-01-21 14:25:41.527680366 +0000 UTC m=+4602.490710929" watchObservedRunningTime="2026-01-21 14:25:41.534328814 +0000 UTC m=+4602.497359357"
Jan 21 14:25:53 crc kubenswrapper[4959]: I0121 14:25:53.287478 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d"
Jan 21 14:25:53 crc kubenswrapper[4959]: E0121 14:25:53.288319 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:26:07 crc kubenswrapper[4959]: I0121 14:26:07.287666 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d"
Jan 21 14:26:07 crc kubenswrapper[4959]: E0121 14:26:07.288453 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.441338 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9cvdt/must-gather-bnfk7"]
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.444184 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/must-gather-bnfk7"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.449073 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9cvdt"/"kube-root-ca.crt"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.452508 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9cvdt"/"openshift-service-ca.crt"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.462358 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9cvdt/must-gather-bnfk7"]
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.545599 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e64b729-53c6-481b-8085-d5e100e34d51-must-gather-output\") pod \"must-gather-bnfk7\" (UID: \"4e64b729-53c6-481b-8085-d5e100e34d51\") " pod="openshift-must-gather-9cvdt/must-gather-bnfk7"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.545769 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9wk9\" (UniqueName: \"kubernetes.io/projected/4e64b729-53c6-481b-8085-d5e100e34d51-kube-api-access-t9wk9\") pod \"must-gather-bnfk7\" (UID: \"4e64b729-53c6-481b-8085-d5e100e34d51\") " pod="openshift-must-gather-9cvdt/must-gather-bnfk7"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.647991 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e64b729-53c6-481b-8085-d5e100e34d51-must-gather-output\") pod \"must-gather-bnfk7\" (UID: \"4e64b729-53c6-481b-8085-d5e100e34d51\") " pod="openshift-must-gather-9cvdt/must-gather-bnfk7"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.648183 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9wk9\" (UniqueName: \"kubernetes.io/projected/4e64b729-53c6-481b-8085-d5e100e34d51-kube-api-access-t9wk9\") pod \"must-gather-bnfk7\" (UID: \"4e64b729-53c6-481b-8085-d5e100e34d51\") " pod="openshift-must-gather-9cvdt/must-gather-bnfk7"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.648649 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e64b729-53c6-481b-8085-d5e100e34d51-must-gather-output\") pod \"must-gather-bnfk7\" (UID: \"4e64b729-53c6-481b-8085-d5e100e34d51\") " pod="openshift-must-gather-9cvdt/must-gather-bnfk7"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.671013 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9wk9\" (UniqueName: \"kubernetes.io/projected/4e64b729-53c6-481b-8085-d5e100e34d51-kube-api-access-t9wk9\") pod \"must-gather-bnfk7\" (UID: \"4e64b729-53c6-481b-8085-d5e100e34d51\") " pod="openshift-must-gather-9cvdt/must-gather-bnfk7"
Jan 21 14:26:12 crc kubenswrapper[4959]: I0121 14:26:12.766412 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/must-gather-bnfk7"
Jan 21 14:26:13 crc kubenswrapper[4959]: I0121 14:26:13.276349 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9cvdt/must-gather-bnfk7"]
Jan 21 14:26:13 crc kubenswrapper[4959]: W0121 14:26:13.671750 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e64b729_53c6_481b_8085_d5e100e34d51.slice/crio-a7175ef6c2e4bb927a0c043e43f916901e7d7065a551ec744515c9760c33a960 WatchSource:0}: Error finding container a7175ef6c2e4bb927a0c043e43f916901e7d7065a551ec744515c9760c33a960: Status 404 returned error can't find the container with id a7175ef6c2e4bb927a0c043e43f916901e7d7065a551ec744515c9760c33a960
Jan 21 14:26:13 crc kubenswrapper[4959]: I0121 14:26:13.845940 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/must-gather-bnfk7" event={"ID":"4e64b729-53c6-481b-8085-d5e100e34d51","Type":"ContainerStarted","Data":"a7175ef6c2e4bb927a0c043e43f916901e7d7065a551ec744515c9760c33a960"}
Jan 21 14:26:21 crc kubenswrapper[4959]: I0121 14:26:21.948218 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/must-gather-bnfk7" event={"ID":"4e64b729-53c6-481b-8085-d5e100e34d51","Type":"ContainerStarted","Data":"bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7"}
Jan 21 14:26:21 crc kubenswrapper[4959]: I0121 14:26:21.948793 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/must-gather-bnfk7" event={"ID":"4e64b729-53c6-481b-8085-d5e100e34d51","Type":"ContainerStarted","Data":"689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b"}
Jan 21 14:26:21 crc kubenswrapper[4959]: I0121 14:26:21.981546 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9cvdt/must-gather-bnfk7" podStartSLOduration=2.63449043 podStartE2EDuration="9.98151878s" podCreationTimestamp="2026-01-21 14:26:12 +0000 UTC" firstStartedPulling="2026-01-21 14:26:13.675134921 +0000 UTC m=+4634.638165474" lastFinishedPulling="2026-01-21 14:26:21.022163281 +0000 UTC m=+4641.985193824" observedRunningTime="2026-01-21 14:26:21.97062521 +0000 UTC m=+4642.933655763" watchObservedRunningTime="2026-01-21 14:26:21.98151878 +0000 UTC m=+4642.944549323"
Jan 21 14:26:22 crc kubenswrapper[4959]: I0121 14:26:22.286490 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d"
Jan 21 14:26:22 crc kubenswrapper[4959]: I0121 14:26:22.964978 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"e93b9e76829d9c6d9a5450fccd9269d2e8abae5a4c912581303d59bc6014b9ee"}
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.084353 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-l7hqm"]
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.086224 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.089560 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9cvdt"/"default-dockercfg-fmvq4"
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.193757 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4t5p\" (UniqueName: \"kubernetes.io/projected/1b4a163c-0059-4b3c-b716-4591560ce66f-kube-api-access-q4t5p\") pod \"crc-debug-l7hqm\" (UID: \"1b4a163c-0059-4b3c-b716-4591560ce66f\") " pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.194516 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b4a163c-0059-4b3c-b716-4591560ce66f-host\") pod \"crc-debug-l7hqm\" (UID: \"1b4a163c-0059-4b3c-b716-4591560ce66f\") " pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.296195 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b4a163c-0059-4b3c-b716-4591560ce66f-host\") pod \"crc-debug-l7hqm\" (UID: \"1b4a163c-0059-4b3c-b716-4591560ce66f\") " pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.296353 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b4a163c-0059-4b3c-b716-4591560ce66f-host\") pod \"crc-debug-l7hqm\" (UID: \"1b4a163c-0059-4b3c-b716-4591560ce66f\") " pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.296382 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4t5p\" (UniqueName: \"kubernetes.io/projected/1b4a163c-0059-4b3c-b716-4591560ce66f-kube-api-access-q4t5p\") pod \"crc-debug-l7hqm\" (UID: \"1b4a163c-0059-4b3c-b716-4591560ce66f\") " pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.322436 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4t5p\" (UniqueName: \"kubernetes.io/projected/1b4a163c-0059-4b3c-b716-4591560ce66f-kube-api-access-q4t5p\") pod \"crc-debug-l7hqm\" (UID: \"1b4a163c-0059-4b3c-b716-4591560ce66f\") " pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:26:27 crc kubenswrapper[4959]: I0121 14:26:27.416616 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:26:28 crc kubenswrapper[4959]: I0121 14:26:28.020680 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/crc-debug-l7hqm" event={"ID":"1b4a163c-0059-4b3c-b716-4591560ce66f","Type":"ContainerStarted","Data":"b2f20252ef6040edcf5d54d4832ed677cbda45a1ad793d66e3273ccef89c3189"}
Jan 21 14:26:29 crc kubenswrapper[4959]: I0121 14:26:29.914113 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-765d4c965b-4xv4p_cbbb9f49-5dce-421f-895a-8004dce6f9ad/barbican-api-log/0.log"
Jan 21 14:26:29 crc kubenswrapper[4959]: I0121 14:26:29.925488 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-765d4c965b-4xv4p_cbbb9f49-5dce-421f-895a-8004dce6f9ad/barbican-api/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.330601 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d8765856b-nw6p9_0cc944bb-6924-4282-bd07-5e221f7c7460/barbican-keystone-listener-log/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.336874 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d8765856b-nw6p9_0cc944bb-6924-4282-bd07-5e221f7c7460/barbican-keystone-listener/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.369481 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84d64bc77f-q9rgh_60523f6c-8c7e-4591-b969-515e6d9ac271/barbican-worker-log/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.381361 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84d64bc77f-q9rgh_60523f6c-8c7e-4591-b969-515e6d9ac271/barbican-worker/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.432120 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5gp5j_5c5e5d36-8ca8-4ee8-ade3-a64be2aadaab/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.530398 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19a25822-a400-4324-80d6-af9aa79d33a0/ceilometer-central-agent/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.558285 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19a25822-a400-4324-80d6-af9aa79d33a0/ceilometer-notification-agent/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.586252 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19a25822-a400-4324-80d6-af9aa79d33a0/sg-core/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.595839 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19a25822-a400-4324-80d6-af9aa79d33a0/proxy-httpd/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.636783 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-7wkj2_56c1b89e-0983-4452-b6ff-ddc66c8dcfc7/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.654719 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jm9fs_517053ca-3edf-40b9-b5a8-715d1f39c4a1/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.699685 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a1e6780a-4202-47ad-a81d-3b4e23e96da4/cinder-api-log/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.783349 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a1e6780a-4202-47ad-a81d-3b4e23e96da4/cinder-api/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.969731 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_c638106d-abd9-4707-8da4-b5c5d1c30f57/cinder-backup/0.log"
Jan 21 14:26:30 crc kubenswrapper[4959]: I0121 14:26:30.994057 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_c638106d-abd9-4707-8da4-b5c5d1c30f57/probe/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.041417 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b4abf280-9cc9-46a4-9948-67fdc4e551ab/cinder-scheduler/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.089012 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b4abf280-9cc9-46a4-9948-67fdc4e551ab/probe/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.167422 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_946dc99c-def1-464a-87fe-7a5a8b46b325/cinder-volume/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.199351 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_946dc99c-def1-464a-87fe-7a5a8b46b325/probe/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.232484 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rtx7j_8ca1b87c-733e-4b60-b3f9-c8efd8c56527/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.272182 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-25855_50dd9f09-bbb6-4caf-a0d7-19a991752a70/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.310806 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-g8mkx_b2f855a6-8d49-4c3c-97c2-ce1aee877c27/dnsmasq-dns/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.317794 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-g8mkx_b2f855a6-8d49-4c3c-97c2-ce1aee877c27/init/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.336182 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7dbd37aa-b9e2-4d8b-a249-ea87147b176f/glance-log/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.351786 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7dbd37aa-b9e2-4d8b-a249-ea87147b176f/glance-httpd/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.367550 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ba76fdb1-6ada-495d-8846-35cd2cb9bb4e/glance-log/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.383219 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ba76fdb1-6ada-495d-8846-35cd2cb9bb4e/glance-httpd/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.610605 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77d7b8cf98-z8vcx_79766a29-f585-4567-b158-2506c12277cb/horizon-log/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.716474 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77d7b8cf98-z8vcx_79766a29-f585-4567-b158-2506c12277cb/horizon/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.736952 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mncfl_6d4c9b96-2baa-4fa3-92f4-b263c4123fec/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:26:31 crc kubenswrapper[4959]: I0121 14:26:31.778402 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mhg55_48ab618a-8037-4dc8-ae21-b2e16c55aa47/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:26:32 crc kubenswrapper[4959]: I0121 14:26:32.962092 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cf9fc5bf6-zcb4m_2f12325a-947b-48c4-af78-286eec8a25f8/keystone-api/0.log"
Jan 21 14:26:32 crc kubenswrapper[4959]: I0121 14:26:32.984750 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483401-4qt94_9ecf462b-1ac7-4465-a213-fb1ffcb3c8b3/keystone-cron/0.log"
Jan 21 14:26:32 crc kubenswrapper[4959]: I0121 14:26:32.995912 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_aaca072b-9250-426c-8ce9-0982f870f2c0/kube-state-metrics/0.log"
Jan 21 14:26:33 crc kubenswrapper[4959]: I0121 14:26:33.055359 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-xl7b8_7f2ea3fd-7ce6-4792-b694-c174b9dd1475/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:26:34 crc kubenswrapper[4959]: I0121 14:26:34.053228 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f6d5e362-311d-4ea4-97bf-c1550267ab81/manila-api-log/0.log"
Jan 21 14:26:34 crc kubenswrapper[4959]: I0121 14:26:34.323774 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f6d5e362-311d-4ea4-97bf-c1550267ab81/manila-api/0.log"
Jan 21 14:26:34 crc kubenswrapper[4959]: I0121 14:26:34.527725 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ce9ec2b6-d7fe-407d-896d-df14df1b2c66/manila-scheduler/0.log"
Jan 21 14:26:34 crc kubenswrapper[4959]: I0121 14:26:34.536830 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ce9ec2b6-d7fe-407d-896d-df14df1b2c66/probe/0.log"
Jan 21 14:26:34 crc kubenswrapper[4959]: I0121 14:26:34.811925 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b/manila-share/0.log"
Jan 21 14:26:34 crc kubenswrapper[4959]: I0121 14:26:34.821047 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_10bce8dd-ad00-47b7-b912-b4ecc3f2cb2b/probe/0.log"
Jan 21 14:26:42 crc kubenswrapper[4959]: I0121 14:26:42.200577 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/crc-debug-l7hqm" event={"ID":"1b4a163c-0059-4b3c-b716-4591560ce66f","Type":"ContainerStarted","Data":"e4d9c982f47173f74b26a47aaf2c6390b8ed75f4286cec4ad89de962185185d7"}
Jan 21 14:26:42 crc kubenswrapper[4959]: I0121 14:26:42.214840 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9cvdt/crc-debug-l7hqm" podStartSLOduration=1.093111385 podStartE2EDuration="15.214819169s" podCreationTimestamp="2026-01-21 14:26:27 +0000 UTC" firstStartedPulling="2026-01-21 14:26:27.460408695 +0000 UTC m=+4648.423439238" lastFinishedPulling="2026-01-21 14:26:41.582116479 +0000 UTC m=+4662.545147022" observedRunningTime="2026-01-21 14:26:42.214086629 +0000 UTC m=+4663.177117182" watchObservedRunningTime="2026-01-21 14:26:42.214819169 +0000 UTC m=+4663.177849712"
Jan 21 14:26:53 crc kubenswrapper[4959]: I0121 14:26:53.804221 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sfwv8_40b8577e-ef7c-4aaa-abb5-fca4b4ea2173/controller/0.log"
Jan 21 14:26:53 crc kubenswrapper[4959]: I0121 14:26:53.812648 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sfwv8_40b8577e-ef7c-4aaa-abb5-fca4b4ea2173/kube-rbac-proxy/0.log"
Jan 21 14:26:54 crc kubenswrapper[4959]: I0121 14:26:54.632311 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/controller/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.337441 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/frr/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.404260 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/reloader/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.411334 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/frr-metrics/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.421684 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/kube-rbac-proxy/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.433217 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/kube-rbac-proxy-frr/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.445707 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-frr-files/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.457403 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-reloader/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.471576 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-metrics/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.488530 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wn2jb_8c501f4c-58de-43a4-80c2-5268f10bca20/frr-k8s-webhook-server/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.536283 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-55559dddc4-94kkl_6dd40e28-d4df-4cae-b104-773876261939/manager/0.log"
Jan 21 14:26:57 crc kubenswrapper[4959]: I0121 14:26:57.552782 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8489dff5dc-5dz6x_6b055f38-5f56-4bb9-bfd6-25fb04003144/webhook-server/0.log"
Jan 21 14:26:58 crc kubenswrapper[4959]: I0121 14:26:58.180263 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hdghc_8d3c9089-9424-4aca-87fb-20992ea6ed12/speaker/0.log"
Jan 21 14:26:58 crc kubenswrapper[4959]: I0121 14:26:58.188929 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hdghc_8d3c9089-9424-4aca-87fb-20992ea6ed12/kube-rbac-proxy/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.067542 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/extract/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.112308 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/util/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.125271 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/pull/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.482055 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-f49mc_a588ba98-33be-46aa-a582-4403d3a09a95/manager/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.614526 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-64kwb_cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7/manager/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.632860 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-d69ql_988f7f11-664f-4f70-9b38-2852dd3b17a0/manager/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.788239 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-pp9dq_8075108b-d9e1-40d4-9e2e-4faa59061778/manager/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.800303 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-n54x8_da20d161-5e78-4c3d-a021-75244caefb16/manager/0.log"
Jan 21 14:27:00 crc kubenswrapper[4959]: I0121 14:27:00.858266 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-g4bs8_a6ef5ba7-019c-416f-9003-54c5ce70f01a/manager/0.log"
Jan 21 14:27:01 crc kubenswrapper[4959]: I0121 14:27:01.318612 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-96qjh_dd86c02d-b4ab-42e5-9a16-a968c0aeba96/manager/0.log"
Jan 21 14:27:01 crc kubenswrapper[4959]: I0121 14:27:01.340978 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-fzq84_ed8f3e55-7ed1-4794-8171-461cf3ebc132/manager/0.log"
Jan 21 14:27:01 crc kubenswrapper[4959]: I0121 14:27:01.453421 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-dfzqp_49ec4962-8c60-4bd2-9ada-8f25cc21baa4/manager/0.log"
Jan 21 14:27:01 crc kubenswrapper[4959]: I0121 14:27:01.588890 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-c5fd576c9-gkv5c_fb0839da-0f44-43dd-a240-72c0f032f30a/manager/0.log"
Jan 21 14:27:01 crc kubenswrapper[4959]: I0121 14:27:01.665937 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5t88r_d3753491-e2ab-4cf4-b8be-7de464734343/manager/0.log"
Jan 21 14:27:01 crc kubenswrapper[4959]: I0121 14:27:01.801283 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-h4c6v_a24ac487-ea43-40fd-b6ea-cd7740cf80ce/manager/0.log"
Jan 21 14:27:01 crc kubenswrapper[4959]: I0121 14:27:01.954916 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-crgjd_ae0b11f6-2763-4884-b37b-ec8dc6548a79/manager/0.log"
Jan 21 14:27:01 crc kubenswrapper[4959]: I0121 14:27:01.994586 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kslqm_3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d/manager/0.log"
Jan 21 14:27:02 crc kubenswrapper[4959]: I0121 14:27:02.014085 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8548fndp_db113188-8b44-43d6-8e79-8231fbfff914/manager/0.log"
Jan 21 14:27:02 crc kubenswrapper[4959]: I0121 14:27:02.236219 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6c8559dcdb-l5dgc_106d5d1f-03fd-4706-96e9-f56588efc2ef/operator/0.log"
Jan 21 14:27:02 crc kubenswrapper[4959]: I0121 14:27:02.889123 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bedefb76-bb9d-46cf-87e9-f8001ff9ce64/memcached/0.log"
Jan 21 14:27:03 crc kubenswrapper[4959]: I0121 14:27:03.083878 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77cdb9766f-rtq4k_3913238c-8062-4839-9106-ce99f45ccadf/neutron-api/0.log"
Jan 21 14:27:03 crc kubenswrapper[4959]: I0121 14:27:03.130592 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77cdb9766f-rtq4k_3913238c-8062-4839-9106-ce99f45ccadf/neutron-httpd/0.log"
Jan 21 14:27:03 crc kubenswrapper[4959]: I0121 14:27:03.198162 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ljm4x_14765967-d282-4405-8ad4-03c801137ed7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:27:03 crc kubenswrapper[4959]: I0121 14:27:03.404473 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_49b276b1-4fc7-4b32-88ba-2797fb6dcf0d/nova-api-log/0.log"
Jan 21 14:27:03 crc kubenswrapper[4959]: I0121 14:27:03.964232 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_49b276b1-4fc7-4b32-88ba-2797fb6dcf0d/nova-api-api/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.148220 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3df244ce-5f7c-4173-8ec6-d64e2a462876/nova-cell0-conductor-conductor/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.290086 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_da62b0b7-89d1-4932-a64e-004d8aa58035/nova-cell1-conductor-conductor/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.405461 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5bd5c98d7d-k5z9b_c8660b47-58d0-48c2-8359-ec471c30158a/manager/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.412878 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3bb6dcfe-bdb9-40c9-9ae5-5669adb7f897/nova-cell1-novncproxy-novncproxy/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.418062 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v5rrt_a402b706-070e-44a8-b298-231e0e20af75/registry-server/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.493163 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-qqkvq_d259192c-0b25-4615-b3c1-473a23e9facf/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.494757 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6lsrf_1c5d42e4-5a3b-4cea-b0a7-3f334d801f22/manager/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.547513 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-6wjxl_2776361f-f7a5-452f-b847-f1370993200b/manager/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.585519 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fp76n_b5d1151c-e9f0-4bc3-b0da-b3df5470a149/operator/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.592041 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d1a88128-2d1e-45a7-b5b2-3e9dd073611e/nova-metadata-log/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.595748 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-hs86v_082d43b2-0714-47d3-9f71-9d386e89b56f/manager/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.762811 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-qzds5_061f7370-4309-4e68-97f3-f57e9832939b/manager/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.779412 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-pcxjt_9247c01e-fd0d-4fe6-8a9b-f50dec002cac/manager/0.log"
Jan 21 14:27:04 crc kubenswrapper[4959]: I0121 14:27:04.800513 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-6jp8j_747460d1-12de-4c88-b0d8-879ff7b62834/manager/0.log"
Jan 21 14:27:06 crc kubenswrapper[4959]: I0121 14:27:06.758813 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d1a88128-2d1e-45a7-b5b2-3e9dd073611e/nova-metadata-metadata/0.log"
Jan 21 14:27:06 crc kubenswrapper[4959]: I0121 14:27:06.925329 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c1c97e4a-5c7a-435a-936d-37db58539c69/nova-scheduler-scheduler/0.log"
Jan 21 14:27:06 crc kubenswrapper[4959]: I0121 14:27:06.988570 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60b98eb7-0886-4619-afde-c4fb7c5ad7c4/galera/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.030003 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60b98eb7-0886-4619-afde-c4fb7c5ad7c4/mysql-bootstrap/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.065177 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d467040c-ef01-4a64-9d0e-bce50426c248/galera/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.078740 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d467040c-ef01-4a64-9d0e-bce50426c248/mysql-bootstrap/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.085591 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_778a7738-b71c-4f16-a695-4b6155aad41a/openstackclient/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.105753 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dvtct_f68f9349-c3ff-4aef-93fc-69cf9e4c2541/openstack-network-exporter/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.117644 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nqz7q_77986c63-ba96-4c22-9b51-925c5b43b092/ovn-controller/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.146797 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-945nd_527befc1-b6a0-41ae-9b03-9057b0dbfe19/ovsdb-server/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.156076 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-945nd_527befc1-b6a0-41ae-9b03-9057b0dbfe19/ovs-vswitchd/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.162616 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-945nd_527befc1-b6a0-41ae-9b03-9057b0dbfe19/ovsdb-server-init/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.197962 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5l5sp_eac779f4-36cd-4bfe-9fdf-ec8dc11c0e49/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.209687 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8e2c3d3c-262c-478b-a773-10213c66032e/ovn-northd/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.215851 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8e2c3d3c-262c-478b-a773-10213c66032e/openstack-network-exporter/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.260748 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b481c4d6-1f2e-40e5-a27b-3f840055418a/ovsdbserver-nb/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.266579 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b481c4d6-1f2e-40e5-a27b-3f840055418a/openstack-network-exporter/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.282906 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4990579d-d1cf-412f-8246-72396bc8fb1a/ovsdbserver-sb/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.291825 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4990579d-d1cf-412f-8246-72396bc8fb1a/openstack-network-exporter/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.350397 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cb6d76584-4n6sf_8d68c35e-a1e4-46f6-a8d3-29cc8206eab3/placement-log/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.449201 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cb6d76584-4n6sf_8d68c35e-a1e4-46f6-a8d3-29cc8206eab3/placement-api/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.550397 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_98e47fb2-a96f-4e35-8d32-1226689833b0/rabbitmq/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.555895 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_98e47fb2-a96f-4e35-8d32-1226689833b0/setup-container/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.591899 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d94ce670-7f1f-426a-a78f-5b62cf5919cf/rabbitmq/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.596758 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d94ce670-7f1f-426a-a78f-5b62cf5919cf/setup-container/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.617298 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-f4wvc_1a205c16-e1de-4ea3-af09-c17d2daf0bdf/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.628566 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-h8h9p_544cfb93-3a88-4fe7-b4b4-b02447782767/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.654336 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-55284_fbcf281a-ccb1-4740-a8f9-06dcdba80445/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.668469 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-7dfqd_7262c849-02b2-4abd-a73a-0b6ca5784de3/ssh-known-hosts-edpm-deployment/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.774340 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b61e1395-82fb-4c39-907d-d5aa160aa10f/tempest-tests-tempest-tests-runner/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.781534 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_329675f0-0d01-475e-b260-b537a74ad7a1/test-operator-logs-container/0.log"
Jan 21 14:27:07 crc kubenswrapper[4959]: I0121 14:27:07.800313 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5gsjf_127adb14-2780-4fe5-ad99-51f928db6ab8/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:27:10 crc kubenswrapper[4959]: I0121 14:27:10.175260 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8hcw2_feec37c0-15ae-4bcf-af2c-1c1622f0edd4/control-plane-machine-set-operator/0.log"
Jan 21 14:27:10 crc kubenswrapper[4959]: I0121 14:27:10.191755 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tlw47_e8fbacbf-6d70-4d37-a123-30151512cf5f/kube-rbac-proxy/0.log"
Jan 21 14:27:10 crc kubenswrapper[4959]: I0121 14:27:10.203056 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tlw47_e8fbacbf-6d70-4d37-a123-30151512cf5f/machine-api-operator/0.log"
Jan 21 14:27:39 crc kubenswrapper[4959]: I0121 14:27:39.877232 4959 generic.go:334] "Generic (PLEG): container finished" podID="1b4a163c-0059-4b3c-b716-4591560ce66f" containerID="e4d9c982f47173f74b26a47aaf2c6390b8ed75f4286cec4ad89de962185185d7" exitCode=0
Jan 21 14:27:39 crc kubenswrapper[4959]: I0121 14:27:39.877264 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/crc-debug-l7hqm" event={"ID":"1b4a163c-0059-4b3c-b716-4591560ce66f","Type":"ContainerDied","Data":"e4d9c982f47173f74b26a47aaf2c6390b8ed75f4286cec4ad89de962185185d7"}
Jan 21 14:27:40 crc kubenswrapper[4959]: I0121 14:27:40.990344 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.028428 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-l7hqm"]
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.037534 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-l7hqm"]
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.054125 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4t5p\" (UniqueName: \"kubernetes.io/projected/1b4a163c-0059-4b3c-b716-4591560ce66f-kube-api-access-q4t5p\") pod \"1b4a163c-0059-4b3c-b716-4591560ce66f\" (UID: \"1b4a163c-0059-4b3c-b716-4591560ce66f\") "
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.054448 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b4a163c-0059-4b3c-b716-4591560ce66f-host\") pod \"1b4a163c-0059-4b3c-b716-4591560ce66f\" (UID: \"1b4a163c-0059-4b3c-b716-4591560ce66f\") "
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.054572 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b4a163c-0059-4b3c-b716-4591560ce66f-host" (OuterVolumeSpecName: "host") pod "1b4a163c-0059-4b3c-b716-4591560ce66f" (UID: "1b4a163c-0059-4b3c-b716-4591560ce66f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.055208 4959 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b4a163c-0059-4b3c-b716-4591560ce66f-host\") on node \"crc\" DevicePath \"\""
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.060765 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4a163c-0059-4b3c-b716-4591560ce66f-kube-api-access-q4t5p" (OuterVolumeSpecName: "kube-api-access-q4t5p") pod "1b4a163c-0059-4b3c-b716-4591560ce66f" (UID: "1b4a163c-0059-4b3c-b716-4591560ce66f"). InnerVolumeSpecName "kube-api-access-q4t5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.158868 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4t5p\" (UniqueName: \"kubernetes.io/projected/1b4a163c-0059-4b3c-b716-4591560ce66f-kube-api-access-q4t5p\") on node \"crc\" DevicePath \"\""
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.296918 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b4a163c-0059-4b3c-b716-4591560ce66f" path="/var/lib/kubelet/pods/1b4a163c-0059-4b3c-b716-4591560ce66f/volumes"
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.895586 4959 scope.go:117] "RemoveContainer" containerID="e4d9c982f47173f74b26a47aaf2c6390b8ed75f4286cec4ad89de962185185d7"
Jan 21 14:27:41 crc kubenswrapper[4959]: I0121 14:27:41.895626 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-l7hqm"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.192874 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-gsgjs"]
Jan 21 14:27:42 crc kubenswrapper[4959]: E0121 14:27:42.194672 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4a163c-0059-4b3c-b716-4591560ce66f" containerName="container-00"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.194783 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4a163c-0059-4b3c-b716-4591560ce66f" containerName="container-00"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.195110 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4a163c-0059-4b3c-b716-4591560ce66f" containerName="container-00"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.195973 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.198011 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9cvdt"/"default-dockercfg-fmvq4"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.290411 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crh6q\" (UniqueName: \"kubernetes.io/projected/b5069dec-1f15-4db7-a7fd-31f749bec0e0-kube-api-access-crh6q\") pod \"crc-debug-gsgjs\" (UID: \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\") " pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.290812 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5069dec-1f15-4db7-a7fd-31f749bec0e0-host\") pod \"crc-debug-gsgjs\" (UID: \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\") " pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.392287 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5069dec-1f15-4db7-a7fd-31f749bec0e0-host\") pod \"crc-debug-gsgjs\" (UID: \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\") " pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.392745 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crh6q\" (UniqueName: \"kubernetes.io/projected/b5069dec-1f15-4db7-a7fd-31f749bec0e0-kube-api-access-crh6q\") pod \"crc-debug-gsgjs\" (UID: \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\") " pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.393433 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5069dec-1f15-4db7-a7fd-31f749bec0e0-host\") pod \"crc-debug-gsgjs\" (UID: \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\") " pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.466651 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crh6q\" (UniqueName: \"kubernetes.io/projected/b5069dec-1f15-4db7-a7fd-31f749bec0e0-kube-api-access-crh6q\") pod \"crc-debug-gsgjs\" (UID: \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\") " pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.513658 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:42 crc kubenswrapper[4959]: I0121 14:27:42.910112 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/crc-debug-gsgjs" event={"ID":"b5069dec-1f15-4db7-a7fd-31f749bec0e0","Type":"ContainerStarted","Data":"7f211d8d25fe84449465b3b85e4f88f3ce2bc9241cf50c178ff83adaa20b8865"}
Jan 21 14:27:43 crc kubenswrapper[4959]: I0121 14:27:43.920534 4959 generic.go:334] "Generic (PLEG): container finished" podID="b5069dec-1f15-4db7-a7fd-31f749bec0e0" containerID="a45342076d7d85ace0863f482f29e629474e652a06f16721597f24573d5e8d39" exitCode=0
Jan 21 14:27:43 crc kubenswrapper[4959]: I0121 14:27:43.920602 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/crc-debug-gsgjs" event={"ID":"b5069dec-1f15-4db7-a7fd-31f749bec0e0","Type":"ContainerDied","Data":"a45342076d7d85ace0863f482f29e629474e652a06f16721597f24573d5e8d39"}
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.048128 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.144204 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5069dec-1f15-4db7-a7fd-31f749bec0e0-host\") pod \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\" (UID: \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\") "
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.144314 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5069dec-1f15-4db7-a7fd-31f749bec0e0-host" (OuterVolumeSpecName: "host") pod "b5069dec-1f15-4db7-a7fd-31f749bec0e0" (UID: "b5069dec-1f15-4db7-a7fd-31f749bec0e0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.144393 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crh6q\" (UniqueName: \"kubernetes.io/projected/b5069dec-1f15-4db7-a7fd-31f749bec0e0-kube-api-access-crh6q\") pod \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\" (UID: \"b5069dec-1f15-4db7-a7fd-31f749bec0e0\") "
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.147560 4959 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5069dec-1f15-4db7-a7fd-31f749bec0e0-host\") on node \"crc\" DevicePath \"\""
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.173491 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5069dec-1f15-4db7-a7fd-31f749bec0e0-kube-api-access-crh6q" (OuterVolumeSpecName: "kube-api-access-crh6q") pod "b5069dec-1f15-4db7-a7fd-31f749bec0e0" (UID: "b5069dec-1f15-4db7-a7fd-31f749bec0e0"). InnerVolumeSpecName "kube-api-access-crh6q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.249186 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crh6q\" (UniqueName: \"kubernetes.io/projected/b5069dec-1f15-4db7-a7fd-31f749bec0e0-kube-api-access-crh6q\") on node \"crc\" DevicePath \"\""
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.936136 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/crc-debug-gsgjs" event={"ID":"b5069dec-1f15-4db7-a7fd-31f749bec0e0","Type":"ContainerDied","Data":"7f211d8d25fe84449465b3b85e4f88f3ce2bc9241cf50c178ff83adaa20b8865"}
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.936180 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f211d8d25fe84449465b3b85e4f88f3ce2bc9241cf50c178ff83adaa20b8865"
Jan 21 14:27:45 crc kubenswrapper[4959]: I0121 14:27:45.936190 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-gsgjs"
Jan 21 14:27:46 crc kubenswrapper[4959]: I0121 14:27:46.043041 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-gsgjs"]
Jan 21 14:27:46 crc kubenswrapper[4959]: I0121 14:27:46.050875 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-gsgjs"]
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.221912 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-5p4vl"]
Jan 21 14:27:47 crc kubenswrapper[4959]: E0121 14:27:47.222559 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5069dec-1f15-4db7-a7fd-31f749bec0e0" containerName="container-00"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.222572 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5069dec-1f15-4db7-a7fd-31f749bec0e0" containerName="container-00"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.222750 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5069dec-1f15-4db7-a7fd-31f749bec0e0" containerName="container-00"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.223397 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.226923 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9cvdt"/"default-dockercfg-fmvq4"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.288475 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547ca3d1-b634-4985-ae1b-da2dd67f1659-host\") pod \"crc-debug-5p4vl\" (UID: \"547ca3d1-b634-4985-ae1b-da2dd67f1659\") " pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.288743 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9662k\" (UniqueName: \"kubernetes.io/projected/547ca3d1-b634-4985-ae1b-da2dd67f1659-kube-api-access-9662k\") pod \"crc-debug-5p4vl\" (UID: \"547ca3d1-b634-4985-ae1b-da2dd67f1659\") " pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.298552 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5069dec-1f15-4db7-a7fd-31f749bec0e0" path="/var/lib/kubelet/pods/b5069dec-1f15-4db7-a7fd-31f749bec0e0/volumes"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.390792 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9662k\" (UniqueName: \"kubernetes.io/projected/547ca3d1-b634-4985-ae1b-da2dd67f1659-kube-api-access-9662k\") pod \"crc-debug-5p4vl\" (UID: \"547ca3d1-b634-4985-ae1b-da2dd67f1659\") " pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.391014 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547ca3d1-b634-4985-ae1b-da2dd67f1659-host\") pod \"crc-debug-5p4vl\" (UID: \"547ca3d1-b634-4985-ae1b-da2dd67f1659\") " pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.391634 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547ca3d1-b634-4985-ae1b-da2dd67f1659-host\") pod \"crc-debug-5p4vl\" (UID: \"547ca3d1-b634-4985-ae1b-da2dd67f1659\") " pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:47 crc kubenswrapper[4959]: I0121 14:27:47.898628 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9662k\" (UniqueName: \"kubernetes.io/projected/547ca3d1-b634-4985-ae1b-da2dd67f1659-kube-api-access-9662k\") pod \"crc-debug-5p4vl\" (UID: \"547ca3d1-b634-4985-ae1b-da2dd67f1659\") " pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:48 crc kubenswrapper[4959]: I0121 14:27:48.142646 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:48 crc kubenswrapper[4959]: I0121 14:27:48.965567 4959 generic.go:334] "Generic (PLEG): container finished" podID="547ca3d1-b634-4985-ae1b-da2dd67f1659" containerID="79c421b6de6796b29a00d57425da26dcb9412f3eeefbdcf90d30fb0af74ebff8" exitCode=0
Jan 21 14:27:48 crc kubenswrapper[4959]: I0121 14:27:48.965664 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/crc-debug-5p4vl" event={"ID":"547ca3d1-b634-4985-ae1b-da2dd67f1659","Type":"ContainerDied","Data":"79c421b6de6796b29a00d57425da26dcb9412f3eeefbdcf90d30fb0af74ebff8"}
Jan 21 14:27:48 crc kubenswrapper[4959]: I0121 14:27:48.974086 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/crc-debug-5p4vl" event={"ID":"547ca3d1-b634-4985-ae1b-da2dd67f1659","Type":"ContainerStarted","Data":"b4153c5a1cb33d29354e0b4b3deadd937e36ce9cb97b3b9f3eb6a3d45153725a"}
Jan 21 14:27:49 crc kubenswrapper[4959]: I0121 14:27:49.010080 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-5p4vl"]
Jan 21 14:27:49 crc kubenswrapper[4959]: I0121 14:27:49.041052 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9cvdt/crc-debug-5p4vl"]
Jan 21 14:27:49 crc kubenswrapper[4959]: I0121 14:27:49.527375 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7jsww_c485a4a8-e2c1-4f29-aec5-0712e70756da/cert-manager-controller/0.log"
Jan 21 14:27:49 crc kubenswrapper[4959]: I0121 14:27:49.545073 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8mqxm_ab2d47a6-0c67-4286-bf53-c32a798cccb6/cert-manager-cainjector/0.log"
Jan 21 14:27:49 crc kubenswrapper[4959]: I0121 14:27:49.554631 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4lb8q_0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40/cert-manager-webhook/0.log"
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.160828 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.267834 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9662k\" (UniqueName: \"kubernetes.io/projected/547ca3d1-b634-4985-ae1b-da2dd67f1659-kube-api-access-9662k\") pod \"547ca3d1-b634-4985-ae1b-da2dd67f1659\" (UID: \"547ca3d1-b634-4985-ae1b-da2dd67f1659\") "
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.268013 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547ca3d1-b634-4985-ae1b-da2dd67f1659-host\") pod \"547ca3d1-b634-4985-ae1b-da2dd67f1659\" (UID: \"547ca3d1-b634-4985-ae1b-da2dd67f1659\") "
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.268895 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/547ca3d1-b634-4985-ae1b-da2dd67f1659-host" (OuterVolumeSpecName: "host") pod "547ca3d1-b634-4985-ae1b-da2dd67f1659" (UID: "547ca3d1-b634-4985-ae1b-da2dd67f1659"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.285137 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547ca3d1-b634-4985-ae1b-da2dd67f1659-kube-api-access-9662k" (OuterVolumeSpecName: "kube-api-access-9662k") pod "547ca3d1-b634-4985-ae1b-da2dd67f1659" (UID: "547ca3d1-b634-4985-ae1b-da2dd67f1659"). InnerVolumeSpecName "kube-api-access-9662k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.371618 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9662k\" (UniqueName: \"kubernetes.io/projected/547ca3d1-b634-4985-ae1b-da2dd67f1659-kube-api-access-9662k\") on node \"crc\" DevicePath \"\""
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.371915 4959 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/547ca3d1-b634-4985-ae1b-da2dd67f1659-host\") on node \"crc\" DevicePath \"\""
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.993050 4959 scope.go:117] "RemoveContainer" containerID="79c421b6de6796b29a00d57425da26dcb9412f3eeefbdcf90d30fb0af74ebff8"
Jan 21 14:27:50 crc kubenswrapper[4959]: I0121 14:27:50.993105 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cvdt/crc-debug-5p4vl"
Jan 21 14:27:51 crc kubenswrapper[4959]: I0121 14:27:51.303689 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547ca3d1-b634-4985-ae1b-da2dd67f1659" path="/var/lib/kubelet/pods/547ca3d1-b634-4985-ae1b-da2dd67f1659/volumes"
Jan 21 14:27:55 crc kubenswrapper[4959]: I0121 14:27:55.332330 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qwnmg_9616773b-3c4f-4141-871b-0d35828d0d52/nmstate-console-plugin/0.log"
Jan 21 14:27:55 crc kubenswrapper[4959]: I0121 14:27:55.376982 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jr2mn_e5a1db10-de2f-423d-a482-087eb1eaf3d0/nmstate-handler/0.log"
Jan 21 14:27:55 crc kubenswrapper[4959]: I0121 14:27:55.386729 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-sspgk_eb1df3cf-b716-4606-8f5b-fb2f1631e5fa/nmstate-metrics/0.log"
Jan 21 14:27:55 crc kubenswrapper[4959]: I0121 14:27:55.396244 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-sspgk_eb1df3cf-b716-4606-8f5b-fb2f1631e5fa/kube-rbac-proxy/0.log"
Jan 21 14:27:55 crc kubenswrapper[4959]: I0121 14:27:55.409715 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-6wfg2_3640732f-6cfa-4b56-a153-bfdc00a70169/nmstate-operator/0.log"
Jan 21 14:27:55 crc kubenswrapper[4959]: I0121 14:27:55.421834 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-g5bl4_62025af4-bb62-4cbd-a420-77ce0cbea9ff/nmstate-webhook/0.log"
Jan 21 14:28:07 crc kubenswrapper[4959]: I0121 14:28:07.654844 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sfwv8_40b8577e-ef7c-4aaa-abb5-fca4b4ea2173/controller/0.log"
Jan 21 14:28:07 crc kubenswrapper[4959]: I0121 14:28:07.664030 4959 log.go:25] "Finished parsing log file"
path="/var/log/pods/metallb-system_controller-6968d8fdc4-sfwv8_40b8577e-ef7c-4aaa-abb5-fca4b4ea2173/kube-rbac-proxy/0.log" Jan 21 14:28:07 crc kubenswrapper[4959]: I0121 14:28:07.693350 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/controller/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.261761 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/frr/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.271515 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/reloader/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.287466 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/frr-metrics/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.306531 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/kube-rbac-proxy/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.316959 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/kube-rbac-proxy-frr/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.328167 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-frr-files/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.340634 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-reloader/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.351738 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-metrics/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.365844 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wn2jb_8c501f4c-58de-43a4-80c2-5268f10bca20/frr-k8s-webhook-server/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.400958 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-55559dddc4-94kkl_6dd40e28-d4df-4cae-b104-773876261939/manager/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.414151 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8489dff5dc-5dz6x_6b055f38-5f56-4bb9-bfd6-25fb04003144/webhook-server/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.765849 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hdghc_8d3c9089-9424-4aca-87fb-20992ea6ed12/speaker/0.log" Jan 21 14:28:09 crc kubenswrapper[4959]: I0121 14:28:09.774308 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hdghc_8d3c9089-9424-4aca-87fb-20992ea6ed12/kube-rbac-proxy/0.log" Jan 21 14:28:14 crc kubenswrapper[4959]: I0121 14:28:14.321397 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878_00065f84-3765-45af-b9ee-9b8b83ebc1b8/extract/0.log" Jan 21 14:28:14 crc kubenswrapper[4959]: I0121 14:28:14.330055 4959 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878_00065f84-3765-45af-b9ee-9b8b83ebc1b8/util/0.log" Jan 21 14:28:14 crc kubenswrapper[4959]: I0121 14:28:14.340234 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf6878_00065f84-3765-45af-b9ee-9b8b83ebc1b8/pull/0.log" Jan 21 14:28:14 crc kubenswrapper[4959]: I0121 14:28:14.440346 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c_3d19cedc-8035-48c5-9702-3670bcf397dc/extract/0.log" Jan 21 14:28:14 crc kubenswrapper[4959]: I0121 14:28:14.450824 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c_3d19cedc-8035-48c5-9702-3670bcf397dc/util/0.log" Jan 21 14:28:14 crc kubenswrapper[4959]: I0121 14:28:14.462432 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gfk7c_3d19cedc-8035-48c5-9702-3670bcf397dc/pull/0.log" Jan 21 14:28:15 crc kubenswrapper[4959]: I0121 14:28:15.086763 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2gp7k_a8d96f57-3e2b-4959-9205-7ccb1f90abf2/registry-server/0.log" Jan 21 14:28:15 crc kubenswrapper[4959]: I0121 14:28:15.092312 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2gp7k_a8d96f57-3e2b-4959-9205-7ccb1f90abf2/extract-utilities/0.log" Jan 21 14:28:15 crc kubenswrapper[4959]: I0121 14:28:15.103019 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2gp7k_a8d96f57-3e2b-4959-9205-7ccb1f90abf2/extract-content/0.log" Jan 21 14:28:16 crc kubenswrapper[4959]: I0121 14:28:16.006886 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtg48_726bd9b3-bca1-4956-9252-8c52bf6860b4/registry-server/0.log" Jan 21 14:28:16 crc kubenswrapper[4959]: I0121 14:28:16.013465 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtg48_726bd9b3-bca1-4956-9252-8c52bf6860b4/extract-utilities/0.log" Jan 21 14:28:16 crc kubenswrapper[4959]: I0121 14:28:16.022572 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtg48_726bd9b3-bca1-4956-9252-8c52bf6860b4/extract-content/0.log" Jan 21 14:28:16 crc kubenswrapper[4959]: I0121 14:28:16.040530 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-khptd_893eff4f-b820-41bf-9278-3c7daaeeb0b7/marketplace-operator/0.log" Jan 21 14:28:16 crc kubenswrapper[4959]: I0121 14:28:16.211498 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k5x8p_3d13b31b-9111-4e07-83c4-c55c579cb41f/registry-server/0.log" Jan 21 14:28:16 crc kubenswrapper[4959]: I0121 14:28:16.218184 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k5x8p_3d13b31b-9111-4e07-83c4-c55c579cb41f/extract-utilities/0.log" Jan 21 14:28:16 crc kubenswrapper[4959]: I0121 14:28:16.225340 4959 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-k5x8p_3d13b31b-9111-4e07-83c4-c55c579cb41f/extract-content/0.log" Jan 21 14:28:17 crc kubenswrapper[4959]: I0121 14:28:17.037267 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xrqzp_2bc62759-ca0b-47bf-8839-c23821a9124e/registry-server/0.log" Jan 21 14:28:17 crc kubenswrapper[4959]: I0121 14:28:17.059434 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xrqzp_2bc62759-ca0b-47bf-8839-c23821a9124e/extract-utilities/0.log" Jan 21 14:28:17 crc kubenswrapper[4959]: I0121 14:28:17.130299 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xrqzp_2bc62759-ca0b-47bf-8839-c23821a9124e/extract-content/0.log" Jan 21 14:28:51 crc kubenswrapper[4959]: I0121 14:28:51.379696 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:28:51 crc kubenswrapper[4959]: I0121 14:28:51.380354 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:29:21 crc kubenswrapper[4959]: I0121 14:29:21.380224 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:29:21 crc kubenswrapper[4959]: I0121 14:29:21.380862 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:29:39 crc kubenswrapper[4959]: I0121 14:29:39.777673 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sfwv8_40b8577e-ef7c-4aaa-abb5-fca4b4ea2173/controller/0.log" Jan 21 14:29:39 crc kubenswrapper[4959]: I0121 14:29:39.786040 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sfwv8_40b8577e-ef7c-4aaa-abb5-fca4b4ea2173/kube-rbac-proxy/0.log" Jan 21 14:29:39 crc kubenswrapper[4959]: I0121 14:29:39.807064 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/controller/0.log" Jan 21 14:29:39 crc kubenswrapper[4959]: I0121 14:29:39.970651 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7jsww_c485a4a8-e2c1-4f29-aec5-0712e70756da/cert-manager-controller/0.log" Jan 21 14:29:39 crc kubenswrapper[4959]: I0121 14:29:39.988848 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8mqxm_ab2d47a6-0c67-4286-bf53-c32a798cccb6/cert-manager-cainjector/0.log" Jan 21 14:29:40 crc kubenswrapper[4959]: I0121 14:29:40.003060 4959 log.go:25] 
"Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4lb8q_0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40/cert-manager-webhook/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.082530 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/extract/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.095878 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/util/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.104815 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/pull/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.237693 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-f49mc_a588ba98-33be-46aa-a582-4403d3a09a95/manager/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.340128 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-64kwb_cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7/manager/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.353987 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-d69ql_988f7f11-664f-4f70-9b38-2852dd3b17a0/manager/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.363359 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/frr/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.384298 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/reloader/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.391296 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/frr-metrics/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.400763 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/kube-rbac-proxy/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.408564 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/kube-rbac-proxy-frr/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.416873 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-frr-files/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.427671 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-reloader/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.434340 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rwtbb_197db6ef-4bd0-4bf4-b9d8-c44565c03be6/cp-metrics/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.450884 4959 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wn2jb_8c501f4c-58de-43a4-80c2-5268f10bca20/frr-k8s-webhook-server/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.453232 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-pp9dq_8075108b-d9e1-40d4-9e2e-4faa59061778/manager/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.466058 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-n54x8_da20d161-5e78-4c3d-a021-75244caefb16/manager/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.487912 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-55559dddc4-94kkl_6dd40e28-d4df-4cae-b104-773876261939/manager/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.488752 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-g4bs8_a6ef5ba7-019c-416f-9003-54c5ce70f01a/manager/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.499816 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8489dff5dc-5dz6x_6b055f38-5f56-4bb9-bfd6-25fb04003144/webhook-server/0.log" Jan 21 14:29:41 crc kubenswrapper[4959]: I0121 14:29:41.987639 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-96qjh_dd86c02d-b4ab-42e5-9a16-a968c0aeba96/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.001768 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-fzq84_ed8f3e55-7ed1-4794-8171-461cf3ebc132/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.016979 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hdghc_8d3c9089-9424-4aca-87fb-20992ea6ed12/speaker/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.026201 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hdghc_8d3c9089-9424-4aca-87fb-20992ea6ed12/kube-rbac-proxy/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.082562 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-dfzqp_49ec4962-8c60-4bd2-9ada-8f25cc21baa4/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.145887 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-c5fd576c9-gkv5c_fb0839da-0f44-43dd-a240-72c0f032f30a/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.187638 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5t88r_d3753491-e2ab-4cf4-b8be-7de464734343/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.239265 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-h4c6v_a24ac487-ea43-40fd-b6ea-cd7740cf80ce/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.315330 4959 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-crgjd_ae0b11f6-2763-4884-b37b-ec8dc6548a79/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.325949 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kslqm_3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.341109 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8548fndp_db113188-8b44-43d6-8e79-8231fbfff914/manager/0.log" Jan 21 14:29:42 crc kubenswrapper[4959]: I0121 14:29:42.464054 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6c8559dcdb-l5dgc_106d5d1f-03fd-4706-96e9-f56588efc2ef/operator/0.log" Jan 21 14:29:43 crc kubenswrapper[4959]: I0121 14:29:43.066917 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7jsww_c485a4a8-e2c1-4f29-aec5-0712e70756da/cert-manager-controller/0.log" Jan 21 14:29:43 crc kubenswrapper[4959]: I0121 14:29:43.082240 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8mqxm_ab2d47a6-0c67-4286-bf53-c32a798cccb6/cert-manager-cainjector/0.log" Jan 21 14:29:43 crc kubenswrapper[4959]: I0121 14:29:43.093611 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4lb8q_0f485d8e-712b-4a6e-a9ce-ddc3cadb7a40/cert-manager-webhook/0.log" Jan 21 14:29:43 crc kubenswrapper[4959]: I0121 14:29:43.910129 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8hcw2_feec37c0-15ae-4bcf-af2c-1c1622f0edd4/control-plane-machine-set-operator/0.log" Jan 21 14:29:43 crc kubenswrapper[4959]: I0121 14:29:43.923140 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tlw47_e8fbacbf-6d70-4d37-a123-30151512cf5f/kube-rbac-proxy/0.log" Jan 21 14:29:43 crc kubenswrapper[4959]: I0121 14:29:43.935064 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tlw47_e8fbacbf-6d70-4d37-a123-30151512cf5f/machine-api-operator/0.log" Jan 21 14:29:43 crc kubenswrapper[4959]: I0121 14:29:43.992276 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5bd5c98d7d-k5z9b_c8660b47-58d0-48c2-8359-ec471c30158a/manager/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.005086 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v5rrt_a402b706-070e-44a8-b298-231e0e20af75/registry-server/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.057124 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6lsrf_1c5d42e4-5a3b-4cea-b0a7-3f334d801f22/manager/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.088762 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-6wjxl_2776361f-f7a5-452f-b847-f1370993200b/manager/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.108002 4959 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fp76n_b5d1151c-e9f0-4bc3-b0da-b3df5470a149/operator/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.116399 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-hs86v_082d43b2-0714-47d3-9f71-9d386e89b56f/manager/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.182059 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-qzds5_061f7370-4309-4e68-97f3-f57e9832939b/manager/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.192301 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-pcxjt_9247c01e-fd0d-4fe6-8a9b-f50dec002cac/manager/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.204717 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-6jp8j_747460d1-12de-4c88-b0d8-879ff7b62834/manager/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.809152 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/extract/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.820408 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/util/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.832304 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0ac230be47fa8d55343190bef8d7e8c3ee8ad29daa4731b439efa57aa7nqbx4_378bcded-d2db-4b72-bcdf-170b163dcdc4/pull/0.log" Jan 21 14:29:44 crc kubenswrapper[4959]: I0121 14:29:44.934251 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-f49mc_a588ba98-33be-46aa-a582-4403d3a09a95/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.003800 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-64kwb_cc5305f2-72f7-40a6-b0c9-d3abaf7ea1c7/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.020262 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-d69ql_988f7f11-664f-4f70-9b38-2852dd3b17a0/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.117023 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-pp9dq_8075108b-d9e1-40d4-9e2e-4faa59061778/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.126517 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-n54x8_da20d161-5e78-4c3d-a021-75244caefb16/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.142365 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-g4bs8_a6ef5ba7-019c-416f-9003-54c5ce70f01a/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.450975 4959 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-96qjh_dd86c02d-b4ab-42e5-9a16-a968c0aeba96/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.464273 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-fzq84_ed8f3e55-7ed1-4794-8171-461cf3ebc132/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.541061 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-dfzqp_49ec4962-8c60-4bd2-9ada-8f25cc21baa4/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.592632 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-c5fd576c9-gkv5c_fb0839da-0f44-43dd-a240-72c0f032f30a/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.629421 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-5t88r_d3753491-e2ab-4cf4-b8be-7de464734343/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.702618 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-h4c6v_a24ac487-ea43-40fd-b6ea-cd7740cf80ce/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.712552 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qwnmg_9616773b-3c4f-4141-871b-0d35828d0d52/nmstate-console-plugin/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.736900 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jr2mn_e5a1db10-de2f-423d-a482-087eb1eaf3d0/nmstate-handler/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.747159 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-sspgk_eb1df3cf-b716-4606-8f5b-fb2f1631e5fa/nmstate-metrics/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.759807 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-sspgk_eb1df3cf-b716-4606-8f5b-fb2f1631e5fa/kube-rbac-proxy/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.778362 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-6wfg2_3640732f-6cfa-4b56-a153-bfdc00a70169/nmstate-operator/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.789721 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-g5bl4_62025af4-bb62-4cbd-a420-77ce0cbea9ff/nmstate-webhook/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.802179 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-crgjd_ae0b11f6-2763-4884-b37b-ec8dc6548a79/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.813677 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kslqm_3b03d0ff-dd8f-4d09-972a-a1acc9cf5c5d/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.828303 4959 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8548fndp_db113188-8b44-43d6-8e79-8231fbfff914/manager/0.log" Jan 21 14:29:45 crc kubenswrapper[4959]: I0121 14:29:45.970694 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6c8559dcdb-l5dgc_106d5d1f-03fd-4706-96e9-f56588efc2ef/operator/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.275076 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5bd5c98d7d-k5z9b_c8660b47-58d0-48c2-8359-ec471c30158a/manager/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.295878 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v5rrt_a402b706-070e-44a8-b298-231e0e20af75/registry-server/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.355933 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6lsrf_1c5d42e4-5a3b-4cea-b0a7-3f334d801f22/manager/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.387353 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-6wjxl_2776361f-f7a5-452f-b847-f1370993200b/manager/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.417587 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fp76n_b5d1151c-e9f0-4bc3-b0da-b3df5470a149/operator/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.427205 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-hs86v_082d43b2-0714-47d3-9f71-9d386e89b56f/manager/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.505521 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-qzds5_061f7370-4309-4e68-97f3-f57e9832939b/manager/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.516529 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-pcxjt_9247c01e-fd0d-4fe6-8a9b-f50dec002cac/manager/0.log" Jan 21 14:29:47 crc kubenswrapper[4959]: I0121 14:29:47.528255 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-6jp8j_747460d1-12de-4c88-b0d8-879ff7b62834/manager/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.146359 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tqwdg_342f1ad8-984e-41bd-acca-edad9366e45d/kube-multus-additional-cni-plugins/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.157010 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tqwdg_342f1ad8-984e-41bd-acca-edad9366e45d/egress-router-binary-copy/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.167660 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tqwdg_342f1ad8-984e-41bd-acca-edad9366e45d/cni-plugins/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.175076 4959 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tqwdg_342f1ad8-984e-41bd-acca-edad9366e45d/bond-cni-plugin/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.180601 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tqwdg_342f1ad8-984e-41bd-acca-edad9366e45d/routeoverride-cni/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.193597 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tqwdg_342f1ad8-984e-41bd-acca-edad9366e45d/whereabouts-cni-bincopy/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.201606 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-tqwdg_342f1ad8-984e-41bd-acca-edad9366e45d/whereabouts-cni/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.232306 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-xgxkj_2d34f545-b950-49af-9300-d1eb2a1495eb/multus-admission-controller/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.241152 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-xgxkj_2d34f545-b950-49af-9300-d1eb2a1495eb/kube-rbac-proxy/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.310525 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/2.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.429506 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5zw9_867d68b2-3803-46b0-b974-62ec7ee89b49/kube-multus/3.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.464990 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-6mzgn_2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585/network-metrics-daemon/0.log" Jan 21 14:29:49 crc kubenswrapper[4959]: I0121 14:29:49.482037 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-6mzgn_2af1d4ef-d00b-4bf6-b2ff-77d30f5f5585/kube-rbac-proxy/0.log" Jan 21 14:29:51 crc kubenswrapper[4959]: I0121 14:29:51.379343 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:29:51 crc kubenswrapper[4959]: I0121 14:29:51.381085 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:29:51 crc kubenswrapper[4959]: I0121 14:29:51.381594 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 14:29:51 crc kubenswrapper[4959]: I0121 14:29:51.383240 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e93b9e76829d9c6d9a5450fccd9269d2e8abae5a4c912581303d59bc6014b9ee"} 
pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:29:51 crc kubenswrapper[4959]: I0121 14:29:51.383424 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://e93b9e76829d9c6d9a5450fccd9269d2e8abae5a4c912581303d59bc6014b9ee" gracePeriod=600 Jan 21 14:29:52 crc kubenswrapper[4959]: I0121 14:29:52.230482 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="e93b9e76829d9c6d9a5450fccd9269d2e8abae5a4c912581303d59bc6014b9ee" exitCode=0 Jan 21 14:29:52 crc kubenswrapper[4959]: I0121 14:29:52.230598 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"e93b9e76829d9c6d9a5450fccd9269d2e8abae5a4c912581303d59bc6014b9ee"} Jan 21 14:29:52 crc kubenswrapper[4959]: I0121 14:29:52.231071 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"} Jan 21 14:29:52 crc kubenswrapper[4959]: I0121 14:29:52.231114 4959 scope.go:117] "RemoveContainer" containerID="6fe63d2079c3b86a3ce32370b8f7ad4a401890a9a31ffe41df2ed621cc1b9b3d" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.161587 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc"] Jan 21 14:30:00 crc kubenswrapper[4959]: E0121 14:30:00.162845 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547ca3d1-b634-4985-ae1b-da2dd67f1659" containerName="container-00" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.162866 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="547ca3d1-b634-4985-ae1b-da2dd67f1659" containerName="container-00" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.163132 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="547ca3d1-b634-4985-ae1b-da2dd67f1659" containerName="container-00" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.163962 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.180456 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc"] Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.203673 4959 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.204003 4959 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.279109 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e362431-fbe8-4cba-a85d-6fc63528ccf2-secret-volume\") pod \"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.279300 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv4tw\" (UniqueName: \"kubernetes.io/projected/8e362431-fbe8-4cba-a85d-6fc63528ccf2-kube-api-access-qv4tw\") pod \"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.279355 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e362431-fbe8-4cba-a85d-6fc63528ccf2-config-volume\") pod \"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.381462 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e362431-fbe8-4cba-a85d-6fc63528ccf2-secret-volume\") pod \"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.381894 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv4tw\" (UniqueName: \"kubernetes.io/projected/8e362431-fbe8-4cba-a85d-6fc63528ccf2-kube-api-access-qv4tw\") pod \"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.382022 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e362431-fbe8-4cba-a85d-6fc63528ccf2-config-volume\") pod \"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.389273 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e362431-fbe8-4cba-a85d-6fc63528ccf2-config-volume\") pod 
\"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.391013 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e362431-fbe8-4cba-a85d-6fc63528ccf2-secret-volume\") pod \"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.407002 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv4tw\" (UniqueName: \"kubernetes.io/projected/8e362431-fbe8-4cba-a85d-6fc63528ccf2-kube-api-access-qv4tw\") pod \"collect-profiles-29483430-w6vhc\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:00 crc kubenswrapper[4959]: I0121 14:30:00.536584 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:01 crc kubenswrapper[4959]: I0121 14:30:01.056883 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc"] Jan 21 14:30:01 crc kubenswrapper[4959]: I0121 14:30:01.322730 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" event={"ID":"8e362431-fbe8-4cba-a85d-6fc63528ccf2","Type":"ContainerStarted","Data":"645610bfe2d75922a076439dffce1e439121011d3a495a81c01937737abb41c5"} Jan 21 14:30:01 crc kubenswrapper[4959]: I0121 14:30:01.323055 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" event={"ID":"8e362431-fbe8-4cba-a85d-6fc63528ccf2","Type":"ContainerStarted","Data":"577054bb1e09ac21aab9eb293d791311efd9cbe8bf2b2ed65d537b1e093cc838"} Jan 21 14:30:01 crc kubenswrapper[4959]: I0121 14:30:01.348836 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" podStartSLOduration=1.3488159 podStartE2EDuration="1.3488159s" podCreationTimestamp="2026-01-21 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:30:01.340275119 +0000 UTC m=+4862.303305662" watchObservedRunningTime="2026-01-21 14:30:01.3488159 +0000 UTC m=+4862.311846453" Jan 21 14:30:02 crc kubenswrapper[4959]: I0121 14:30:02.335980 4959 generic.go:334] "Generic (PLEG): container finished" podID="8e362431-fbe8-4cba-a85d-6fc63528ccf2" containerID="645610bfe2d75922a076439dffce1e439121011d3a495a81c01937737abb41c5" exitCode=0 Jan 21 14:30:02 crc kubenswrapper[4959]: I0121 14:30:02.336036 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" event={"ID":"8e362431-fbe8-4cba-a85d-6fc63528ccf2","Type":"ContainerDied","Data":"645610bfe2d75922a076439dffce1e439121011d3a495a81c01937737abb41c5"} Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.703004 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.761403 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e362431-fbe8-4cba-a85d-6fc63528ccf2-config-volume\") pod \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.761559 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e362431-fbe8-4cba-a85d-6fc63528ccf2-secret-volume\") pod \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.761810 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv4tw\" (UniqueName: \"kubernetes.io/projected/8e362431-fbe8-4cba-a85d-6fc63528ccf2-kube-api-access-qv4tw\") pod \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\" (UID: \"8e362431-fbe8-4cba-a85d-6fc63528ccf2\") " Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.762391 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e362431-fbe8-4cba-a85d-6fc63528ccf2-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e362431-fbe8-4cba-a85d-6fc63528ccf2" (UID: "8e362431-fbe8-4cba-a85d-6fc63528ccf2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.767850 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e362431-fbe8-4cba-a85d-6fc63528ccf2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e362431-fbe8-4cba-a85d-6fc63528ccf2" (UID: "8e362431-fbe8-4cba-a85d-6fc63528ccf2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.768390 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e362431-fbe8-4cba-a85d-6fc63528ccf2-kube-api-access-qv4tw" (OuterVolumeSpecName: "kube-api-access-qv4tw") pod "8e362431-fbe8-4cba-a85d-6fc63528ccf2" (UID: "8e362431-fbe8-4cba-a85d-6fc63528ccf2"). InnerVolumeSpecName "kube-api-access-qv4tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.864085 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv4tw\" (UniqueName: \"kubernetes.io/projected/8e362431-fbe8-4cba-a85d-6fc63528ccf2-kube-api-access-qv4tw\") on node \"crc\" DevicePath \"\"" Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.864156 4959 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e362431-fbe8-4cba-a85d-6fc63528ccf2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:30:03 crc kubenswrapper[4959]: I0121 14:30:03.864168 4959 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e362431-fbe8-4cba-a85d-6fc63528ccf2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:30:04 crc kubenswrapper[4959]: I0121 14:30:04.353237 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" event={"ID":"8e362431-fbe8-4cba-a85d-6fc63528ccf2","Type":"ContainerDied","Data":"577054bb1e09ac21aab9eb293d791311efd9cbe8bf2b2ed65d537b1e093cc838"} Jan 21 14:30:04 crc kubenswrapper[4959]: I0121 14:30:04.353281 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="577054bb1e09ac21aab9eb293d791311efd9cbe8bf2b2ed65d537b1e093cc838" Jan 21 14:30:04 crc kubenswrapper[4959]: I0121 14:30:04.353285 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-w6vhc" Jan 21 14:30:04 crc kubenswrapper[4959]: I0121 14:30:04.425387 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g"] Jan 21 14:30:04 crc kubenswrapper[4959]: I0121 14:30:04.434901 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-24z8g"] Jan 21 14:30:05 crc kubenswrapper[4959]: I0121 14:30:05.296079 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0219fad6-0737-4f2b-985d-63ada4eaf374" path="/var/lib/kubelet/pods/0219fad6-0737-4f2b-985d-63ada4eaf374/volumes" Jan 21 14:30:37 crc kubenswrapper[4959]: I0121 14:30:37.564419 4959 scope.go:117] "RemoveContainer" containerID="ac81e689d29d93be3221f088fab6fdf5ae1a6f690b16e6ecab1ee53abee0fc82" Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.798148 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5cljv"] Jan 21 14:31:50 crc kubenswrapper[4959]: E0121 14:31:50.799746 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e362431-fbe8-4cba-a85d-6fc63528ccf2" containerName="collect-profiles" Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.799765 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e362431-fbe8-4cba-a85d-6fc63528ccf2" containerName="collect-profiles" Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.799971 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e362431-fbe8-4cba-a85d-6fc63528ccf2" containerName="collect-profiles" Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.801344 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.814242 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cljv"] Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.909939 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8754j\" (UniqueName: \"kubernetes.io/projected/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-kube-api-access-8754j\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.910226 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-catalog-content\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.910300 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-utilities\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:50 crc kubenswrapper[4959]: I0121 14:31:50.998012 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jnt9f"] Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.000485 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.011526 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8754j\" (UniqueName: \"kubernetes.io/projected/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-kube-api-access-8754j\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.011666 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-catalog-content\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.011720 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-utilities\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.012130 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jnt9f"] Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.012181 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-catalog-content\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.012231 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-utilities\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.035544 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8754j\" (UniqueName: \"kubernetes.io/projected/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-kube-api-access-8754j\") pod \"certified-operators-5cljv\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.113252 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-catalog-content\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.113821 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btrgj\" (UniqueName: \"kubernetes.io/projected/9c984e68-96af-402f-ada1-b3d673076af7-kube-api-access-btrgj\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.114006 4959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-utilities\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.137650 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.215851 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-catalog-content\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.216215 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btrgj\" (UniqueName: \"kubernetes.io/projected/9c984e68-96af-402f-ada1-b3d673076af7-kube-api-access-btrgj\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.216451 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-utilities\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.217229 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-utilities\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.217667 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-catalog-content\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.240836 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btrgj\" (UniqueName: \"kubernetes.io/projected/9c984e68-96af-402f-ada1-b3d673076af7-kube-api-access-btrgj\") pod \"redhat-operators-jnt9f\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.329984 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.380128 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.380229 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.717035 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cljv"] Jan 21 14:31:51 crc kubenswrapper[4959]: W0121 14:31:51.722067 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc37002d_945c_4a7b_b7dd_6f4a5abf9d9e.slice/crio-b234a2c8f34b0b47e32744adf52cbdebf5c1d7833cd90f962f9065d78509764a WatchSource:0}: Error finding container b234a2c8f34b0b47e32744adf52cbdebf5c1d7833cd90f962f9065d78509764a: Status 404 returned error can't find the container with id b234a2c8f34b0b47e32744adf52cbdebf5c1d7833cd90f962f9065d78509764a Jan 21 14:31:51 crc kubenswrapper[4959]: W0121 14:31:51.881161 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c984e68_96af_402f_ada1_b3d673076af7.slice/crio-d6b31cd018517efb8530949ae67c8745c0b61bae69368df59740022edf11eb2f WatchSource:0}: Error finding container d6b31cd018517efb8530949ae67c8745c0b61bae69368df59740022edf11eb2f: Status 404 returned error can't find the container with id d6b31cd018517efb8530949ae67c8745c0b61bae69368df59740022edf11eb2f Jan 21 14:31:51 crc kubenswrapper[4959]: I0121 14:31:51.905417 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jnt9f"] Jan 21 14:31:52 crc kubenswrapper[4959]: I0121 14:31:52.402787 4959 generic.go:334] "Generic (PLEG): container finished" podID="9c984e68-96af-402f-ada1-b3d673076af7" containerID="29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0" exitCode=0 Jan 21 14:31:52 crc kubenswrapper[4959]: I0121 14:31:52.402855 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnt9f" event={"ID":"9c984e68-96af-402f-ada1-b3d673076af7","Type":"ContainerDied","Data":"29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0"} Jan 21 14:31:52 crc kubenswrapper[4959]: I0121 14:31:52.403168 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnt9f" event={"ID":"9c984e68-96af-402f-ada1-b3d673076af7","Type":"ContainerStarted","Data":"d6b31cd018517efb8530949ae67c8745c0b61bae69368df59740022edf11eb2f"} Jan 21 14:31:52 crc kubenswrapper[4959]: I0121 14:31:52.404626 4959 generic.go:334] "Generic (PLEG): container finished" podID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerID="ead43ab31ae2193e2ab5eb11caabee5595835f6493817fba76aaad4556ffef6d" exitCode=0 Jan 21 14:31:52 crc kubenswrapper[4959]: I0121 14:31:52.404665 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5cljv" event={"ID":"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e","Type":"ContainerDied","Data":"ead43ab31ae2193e2ab5eb11caabee5595835f6493817fba76aaad4556ffef6d"} Jan 21 14:31:52 crc kubenswrapper[4959]: I0121 14:31:52.404720 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cljv" event={"ID":"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e","Type":"ContainerStarted","Data":"b234a2c8f34b0b47e32744adf52cbdebf5c1d7833cd90f962f9065d78509764a"} Jan 21 14:31:52 crc kubenswrapper[4959]: I0121 14:31:52.405707 4959 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.404013 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bj4qt"] Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.406304 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.415272 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj4qt"] Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.481628 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czb6\" (UniqueName: \"kubernetes.io/projected/be0b60f4-e2dd-4558-8fe5-af196fcc528a-kube-api-access-8czb6\") pod \"redhat-marketplace-bj4qt\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.481687 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-catalog-content\") pod \"redhat-marketplace-bj4qt\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.481732 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-utilities\") pod \"redhat-marketplace-bj4qt\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.584215 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8czb6\" (UniqueName: \"kubernetes.io/projected/be0b60f4-e2dd-4558-8fe5-af196fcc528a-kube-api-access-8czb6\") pod \"redhat-marketplace-bj4qt\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.584278 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-catalog-content\") pod \"redhat-marketplace-bj4qt\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.584330 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-utilities\") pod \"redhat-marketplace-bj4qt\" (UID: 
\"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.584777 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-catalog-content\") pod \"redhat-marketplace-bj4qt\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.584881 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-utilities\") pod \"redhat-marketplace-bj4qt\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.610211 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8czb6\" (UniqueName: \"kubernetes.io/projected/be0b60f4-e2dd-4558-8fe5-af196fcc528a-kube-api-access-8czb6\") pod \"redhat-marketplace-bj4qt\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:53 crc kubenswrapper[4959]: I0121 14:31:53.747211 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:31:54 crc kubenswrapper[4959]: W0121 14:31:54.244740 4959 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0b60f4_e2dd_4558_8fe5_af196fcc528a.slice/crio-4422930d12ac6366c095fee4086036341dcc73b5120226877ddde29d284c6e1d WatchSource:0}: Error finding container 4422930d12ac6366c095fee4086036341dcc73b5120226877ddde29d284c6e1d: Status 404 returned error can't find the container with id 4422930d12ac6366c095fee4086036341dcc73b5120226877ddde29d284c6e1d Jan 21 14:31:54 crc kubenswrapper[4959]: I0121 14:31:54.248672 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj4qt"] Jan 21 14:31:54 crc kubenswrapper[4959]: I0121 14:31:54.422954 4959 generic.go:334] "Generic (PLEG): container finished" podID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerID="49ce4a866caacca9373058d41a026f46037143bf6f0859c8e60707cce4e84000" exitCode=0 Jan 21 14:31:54 crc kubenswrapper[4959]: I0121 14:31:54.423027 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj4qt" event={"ID":"be0b60f4-e2dd-4558-8fe5-af196fcc528a","Type":"ContainerDied","Data":"49ce4a866caacca9373058d41a026f46037143bf6f0859c8e60707cce4e84000"} Jan 21 14:31:54 crc kubenswrapper[4959]: I0121 14:31:54.423063 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj4qt" event={"ID":"be0b60f4-e2dd-4558-8fe5-af196fcc528a","Type":"ContainerStarted","Data":"4422930d12ac6366c095fee4086036341dcc73b5120226877ddde29d284c6e1d"} Jan 21 14:31:54 crc kubenswrapper[4959]: I0121 14:31:54.426602 4959 generic.go:334] "Generic (PLEG): container finished" podID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerID="5baba32726e0ea3351bc03b02c61caba774e604990dba76ac080578f6a25603f" exitCode=0 Jan 21 14:31:54 crc kubenswrapper[4959]: I0121 14:31:54.426711 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cljv" 
event={"ID":"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e","Type":"ContainerDied","Data":"5baba32726e0ea3351bc03b02c61caba774e604990dba76ac080578f6a25603f"} Jan 21 14:31:54 crc kubenswrapper[4959]: I0121 14:31:54.430138 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnt9f" event={"ID":"9c984e68-96af-402f-ada1-b3d673076af7","Type":"ContainerStarted","Data":"427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887"} Jan 21 14:31:55 crc kubenswrapper[4959]: I0121 14:31:55.443162 4959 generic.go:334] "Generic (PLEG): container finished" podID="9c984e68-96af-402f-ada1-b3d673076af7" containerID="427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887" exitCode=0 Jan 21 14:31:55 crc kubenswrapper[4959]: I0121 14:31:55.443310 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnt9f" event={"ID":"9c984e68-96af-402f-ada1-b3d673076af7","Type":"ContainerDied","Data":"427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887"} Jan 21 14:31:56 crc kubenswrapper[4959]: I0121 14:31:56.454571 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnt9f" event={"ID":"9c984e68-96af-402f-ada1-b3d673076af7","Type":"ContainerStarted","Data":"f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5"} Jan 21 14:31:56 crc kubenswrapper[4959]: I0121 14:31:56.457237 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj4qt" event={"ID":"be0b60f4-e2dd-4558-8fe5-af196fcc528a","Type":"ContainerStarted","Data":"f4b8cf2da39a886c5b41f00fb6f3dd3cb87b52ff2ca752a26cc11d6cbc3123ed"} Jan 21 14:31:56 crc kubenswrapper[4959]: I0121 14:31:56.459920 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cljv" event={"ID":"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e","Type":"ContainerStarted","Data":"c8f75b45e4468bc1b78a48629b515999c041c21eb076251a679ec79068ce4cdd"} Jan 21 14:31:56 crc kubenswrapper[4959]: I0121 14:31:56.482206 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jnt9f" podStartSLOduration=2.785203041 podStartE2EDuration="6.482183704s" podCreationTimestamp="2026-01-21 14:31:50 +0000 UTC" firstStartedPulling="2026-01-21 14:31:52.404454874 +0000 UTC m=+4973.367485417" lastFinishedPulling="2026-01-21 14:31:56.101435527 +0000 UTC m=+4977.064466080" observedRunningTime="2026-01-21 14:31:56.473835713 +0000 UTC m=+4977.436866256" watchObservedRunningTime="2026-01-21 14:31:56.482183704 +0000 UTC m=+4977.445214247" Jan 21 14:31:56 crc kubenswrapper[4959]: I0121 14:31:56.503512 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5cljv" podStartSLOduration=3.8438196700000002 podStartE2EDuration="6.50346643s" podCreationTimestamp="2026-01-21 14:31:50 +0000 UTC" firstStartedPulling="2026-01-21 14:31:52.405890843 +0000 UTC m=+4973.368921386" lastFinishedPulling="2026-01-21 14:31:55.065537603 +0000 UTC m=+4976.028568146" observedRunningTime="2026-01-21 14:31:56.49633365 +0000 UTC m=+4977.459364213" watchObservedRunningTime="2026-01-21 14:31:56.50346643 +0000 UTC m=+4977.466496973" Jan 21 14:31:57 crc kubenswrapper[4959]: I0121 14:31:57.470184 4959 generic.go:334] "Generic (PLEG): container finished" podID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerID="f4b8cf2da39a886c5b41f00fb6f3dd3cb87b52ff2ca752a26cc11d6cbc3123ed" exitCode=0 Jan 21 14:31:57 
crc kubenswrapper[4959]: I0121 14:31:57.470484 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj4qt" event={"ID":"be0b60f4-e2dd-4558-8fe5-af196fcc528a","Type":"ContainerDied","Data":"f4b8cf2da39a886c5b41f00fb6f3dd3cb87b52ff2ca752a26cc11d6cbc3123ed"} Jan 21 14:32:01 crc kubenswrapper[4959]: I0121 14:32:01.138055 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:32:01 crc kubenswrapper[4959]: I0121 14:32:01.146598 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:32:01 crc kubenswrapper[4959]: I0121 14:32:01.197405 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:32:01 crc kubenswrapper[4959]: I0121 14:32:01.331121 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:32:01 crc kubenswrapper[4959]: I0121 14:32:01.331170 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:32:01 crc kubenswrapper[4959]: I0121 14:32:01.554799 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:32:02 crc kubenswrapper[4959]: I0121 14:32:02.379458 4959 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jnt9f" podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="registry-server" probeResult="failure" output=< Jan 21 14:32:02 crc kubenswrapper[4959]: timeout: failed to connect service ":50051" within 1s Jan 21 14:32:02 crc kubenswrapper[4959]: > Jan 21 14:32:02 crc kubenswrapper[4959]: I0121 14:32:02.786609 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cljv"] Jan 21 14:32:03 crc kubenswrapper[4959]: I0121 14:32:03.528669 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj4qt" event={"ID":"be0b60f4-e2dd-4558-8fe5-af196fcc528a","Type":"ContainerStarted","Data":"13139d327b9fb65356722ea7fbbd7fbf1bca0df9a6795a2beaba0759f2f3f798"} Jan 21 14:32:04 crc kubenswrapper[4959]: I0121 14:32:04.541303 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5cljv" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerName="registry-server" containerID="cri-o://c8f75b45e4468bc1b78a48629b515999c041c21eb076251a679ec79068ce4cdd" gracePeriod=2 Jan 21 14:32:06 crc kubenswrapper[4959]: I0121 14:32:06.573180 4959 generic.go:334] "Generic (PLEG): container finished" podID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerID="c8f75b45e4468bc1b78a48629b515999c041c21eb076251a679ec79068ce4cdd" exitCode=0 Jan 21 14:32:06 crc kubenswrapper[4959]: I0121 14:32:06.573282 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cljv" event={"ID":"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e","Type":"ContainerDied","Data":"c8f75b45e4468bc1b78a48629b515999c041c21eb076251a679ec79068ce4cdd"} Jan 21 14:32:06 crc kubenswrapper[4959]: I0121 14:32:06.594476 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bj4qt" podStartSLOduration=8.778934353 podStartE2EDuration="13.594459569s" 
podCreationTimestamp="2026-01-21 14:31:53 +0000 UTC" firstStartedPulling="2026-01-21 14:31:54.425429512 +0000 UTC m=+4975.388460055" lastFinishedPulling="2026-01-21 14:31:59.240954728 +0000 UTC m=+4980.203985271" observedRunningTime="2026-01-21 14:32:06.59277148 +0000 UTC m=+4987.555802023" watchObservedRunningTime="2026-01-21 14:32:06.594459569 +0000 UTC m=+4987.557490112" Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.618928 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cljv" event={"ID":"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e","Type":"ContainerDied","Data":"b234a2c8f34b0b47e32744adf52cbdebf5c1d7833cd90f962f9065d78509764a"} Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.619216 4959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b234a2c8f34b0b47e32744adf52cbdebf5c1d7833cd90f962f9065d78509764a" Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.628138 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.695545 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8754j\" (UniqueName: \"kubernetes.io/projected/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-kube-api-access-8754j\") pod \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.695593 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-utilities\") pod \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.695731 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-catalog-content\") pod \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\" (UID: \"bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e\") " Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.696472 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-utilities" (OuterVolumeSpecName: "utilities") pod "bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" (UID: "bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.703374 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-kube-api-access-8754j" (OuterVolumeSpecName: "kube-api-access-8754j") pod "bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" (UID: "bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e"). InnerVolumeSpecName "kube-api-access-8754j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.745248 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" (UID: "bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e"). InnerVolumeSpecName "catalog-content". 
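
[Annotation] The pod_startup_latency_tracker entries are internally consistent: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling), since pull time is excluded from the startup SLO. For redhat-marketplace-bj4qt: 13.594459569s - (14:31:59.240954728 - 14:31:54.425429512) = 8.778934353s, exactly the logged value. A quick check in Go; the timestamps are copied from the entry above, and the monotonic-clock suffix (m=+...) has to be stripped before parsing:

package main

import (
	"fmt"
	"strings"
	"time"
)

// layout matches Go's time.Time.String(), which is what kubelet logs here.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func parse(s string) time.Time {
	// Drop the monotonic clock reading (" m=+4975.388...") if present.
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i]
	}
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	first := parse("2026-01-21 14:31:54.425429512 +0000 UTC m=+4975.388460055")
	last := parse("2026-01-21 14:31:59.240954728 +0000 UTC m=+4980.203985271")
	e2e, err := time.ParseDuration("13.594459569s") // podStartE2EDuration
	if err != nil {
		panic(err)
	}
	pull := last.Sub(first)
	fmt.Println("pull window:", pull)      // 4.815525216s
	fmt.Println("SLO duration:", e2e-pull) // 8.778934353s, matching podStartSLOduration
}
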
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.797569 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8754j\" (UniqueName: \"kubernetes.io/projected/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-kube-api-access-8754j\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.797645 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:07 crc kubenswrapper[4959]: I0121 14:32:07.797658 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:08 crc kubenswrapper[4959]: I0121 14:32:08.627409 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cljv" Jan 21 14:32:08 crc kubenswrapper[4959]: I0121 14:32:08.673969 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cljv"] Jan 21 14:32:08 crc kubenswrapper[4959]: I0121 14:32:08.690451 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5cljv"] Jan 21 14:32:09 crc kubenswrapper[4959]: I0121 14:32:09.297361 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" path="/var/lib/kubelet/pods/bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e/volumes" Jan 21 14:32:11 crc kubenswrapper[4959]: I0121 14:32:11.372481 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:32:11 crc kubenswrapper[4959]: I0121 14:32:11.418142 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:32:11 crc kubenswrapper[4959]: I0121 14:32:11.874789 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jnt9f"] Jan 21 14:32:12 crc kubenswrapper[4959]: I0121 14:32:12.658169 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jnt9f" podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="registry-server" containerID="cri-o://f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5" gracePeriod=2 Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.121757 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.320529 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btrgj\" (UniqueName: \"kubernetes.io/projected/9c984e68-96af-402f-ada1-b3d673076af7-kube-api-access-btrgj\") pod \"9c984e68-96af-402f-ada1-b3d673076af7\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.320655 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-catalog-content\") pod \"9c984e68-96af-402f-ada1-b3d673076af7\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.320802 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-utilities\") pod \"9c984e68-96af-402f-ada1-b3d673076af7\" (UID: \"9c984e68-96af-402f-ada1-b3d673076af7\") " Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.321924 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-utilities" (OuterVolumeSpecName: "utilities") pod "9c984e68-96af-402f-ada1-b3d673076af7" (UID: "9c984e68-96af-402f-ada1-b3d673076af7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.328335 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c984e68-96af-402f-ada1-b3d673076af7-kube-api-access-btrgj" (OuterVolumeSpecName: "kube-api-access-btrgj") pod "9c984e68-96af-402f-ada1-b3d673076af7" (UID: "9c984e68-96af-402f-ada1-b3d673076af7"). InnerVolumeSpecName "kube-api-access-btrgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.424019 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btrgj\" (UniqueName: \"kubernetes.io/projected/9c984e68-96af-402f-ada1-b3d673076af7-kube-api-access-btrgj\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.424057 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.444651 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c984e68-96af-402f-ada1-b3d673076af7" (UID: "9c984e68-96af-402f-ada1-b3d673076af7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.526914 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c984e68-96af-402f-ada1-b3d673076af7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.669938 4959 generic.go:334] "Generic (PLEG): container finished" podID="9c984e68-96af-402f-ada1-b3d673076af7" containerID="f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5" exitCode=0 Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.670021 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnt9f" event={"ID":"9c984e68-96af-402f-ada1-b3d673076af7","Type":"ContainerDied","Data":"f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5"} Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.670084 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jnt9f" event={"ID":"9c984e68-96af-402f-ada1-b3d673076af7","Type":"ContainerDied","Data":"d6b31cd018517efb8530949ae67c8745c0b61bae69368df59740022edf11eb2f"} Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.670138 4959 scope.go:117] "RemoveContainer" containerID="f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.670920 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jnt9f" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.692380 4959 scope.go:117] "RemoveContainer" containerID="427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.719283 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jnt9f"] Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.730303 4959 scope.go:117] "RemoveContainer" containerID="29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.731062 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jnt9f"] Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.747773 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.747828 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.755109 4959 scope.go:117] "RemoveContainer" containerID="f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5" Jan 21 14:32:13 crc kubenswrapper[4959]: E0121 14:32:13.755880 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5\": container with ID starting with f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5 not found: ID does not exist" containerID="f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.755915 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5"} err="failed 
to get container status \"f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5\": rpc error: code = NotFound desc = could not find container \"f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5\": container with ID starting with f564f7104c9dae70ffd8568ed9c50f6cced9c55526cb5275ebadf57aab0839b5 not found: ID does not exist" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.755935 4959 scope.go:117] "RemoveContainer" containerID="427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887" Jan 21 14:32:13 crc kubenswrapper[4959]: E0121 14:32:13.756722 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887\": container with ID starting with 427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887 not found: ID does not exist" containerID="427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.756748 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887"} err="failed to get container status \"427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887\": rpc error: code = NotFound desc = could not find container \"427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887\": container with ID starting with 427f2b60a8b0349501554c86e361112a235531d2c1ac16f1132a8b4c80c82887 not found: ID does not exist" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.756762 4959 scope.go:117] "RemoveContainer" containerID="29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0" Jan 21 14:32:13 crc kubenswrapper[4959]: E0121 14:32:13.756968 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0\": container with ID starting with 29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0 not found: ID does not exist" containerID="29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.756984 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0"} err="failed to get container status \"29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0\": rpc error: code = NotFound desc = could not find container \"29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0\": container with ID starting with 29f1c3aa8b19155982bd6bbae6d3950958dde7fe5f0614d13e3b05cc653ea0c0 not found: ID does not exist" Jan 21 14:32:13 crc kubenswrapper[4959]: I0121 14:32:13.801425 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:32:14 crc kubenswrapper[4959]: I0121 14:32:14.729421 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:32:15 crc kubenswrapper[4959]: I0121 14:32:15.301305 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c984e68-96af-402f-ada1-b3d673076af7" path="/var/lib/kubelet/pods/9c984e68-96af-402f-ada1-b3d673076af7/volumes" Jan 21 14:32:18 crc kubenswrapper[4959]: I0121 14:32:18.482680 4959 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj4qt"] Jan 21 14:32:18 crc kubenswrapper[4959]: I0121 14:32:18.485312 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bj4qt" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerName="registry-server" containerID="cri-o://13139d327b9fb65356722ea7fbbd7fbf1bca0df9a6795a2beaba0759f2f3f798" gracePeriod=2 Jan 21 14:32:18 crc kubenswrapper[4959]: I0121 14:32:18.738088 4959 generic.go:334] "Generic (PLEG): container finished" podID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerID="13139d327b9fb65356722ea7fbbd7fbf1bca0df9a6795a2beaba0759f2f3f798" exitCode=0 Jan 21 14:32:18 crc kubenswrapper[4959]: I0121 14:32:18.738205 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj4qt" event={"ID":"be0b60f4-e2dd-4558-8fe5-af196fcc528a","Type":"ContainerDied","Data":"13139d327b9fb65356722ea7fbbd7fbf1bca0df9a6795a2beaba0759f2f3f798"} Jan 21 14:32:18 crc kubenswrapper[4959]: I0121 14:32:18.961279 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.149959 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-catalog-content\") pod \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.150025 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8czb6\" (UniqueName: \"kubernetes.io/projected/be0b60f4-e2dd-4558-8fe5-af196fcc528a-kube-api-access-8czb6\") pod \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.150284 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-utilities\") pod \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\" (UID: \"be0b60f4-e2dd-4558-8fe5-af196fcc528a\") " Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.150940 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-utilities" (OuterVolumeSpecName: "utilities") pod "be0b60f4-e2dd-4558-8fe5-af196fcc528a" (UID: "be0b60f4-e2dd-4558-8fe5-af196fcc528a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.156041 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0b60f4-e2dd-4558-8fe5-af196fcc528a-kube-api-access-8czb6" (OuterVolumeSpecName: "kube-api-access-8czb6") pod "be0b60f4-e2dd-4558-8fe5-af196fcc528a" (UID: "be0b60f4-e2dd-4558-8fe5-af196fcc528a"). InnerVolumeSpecName "kube-api-access-8czb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.176997 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be0b60f4-e2dd-4558-8fe5-af196fcc528a" (UID: "be0b60f4-e2dd-4558-8fe5-af196fcc528a"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.253034 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.253667 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be0b60f4-e2dd-4558-8fe5-af196fcc528a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.253750 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8czb6\" (UniqueName: \"kubernetes.io/projected/be0b60f4-e2dd-4558-8fe5-af196fcc528a-kube-api-access-8czb6\") on node \"crc\" DevicePath \"\"" Jan 21 14:32:19 crc kubenswrapper[4959]: E0121 14:32:19.526668 4959 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0b60f4_e2dd_4558_8fe5_af196fcc528a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0b60f4_e2dd_4558_8fe5_af196fcc528a.slice/crio-4422930d12ac6366c095fee4086036341dcc73b5120226877ddde29d284c6e1d\": RecentStats: unable to find data in memory cache]" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.753196 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bj4qt" event={"ID":"be0b60f4-e2dd-4558-8fe5-af196fcc528a","Type":"ContainerDied","Data":"4422930d12ac6366c095fee4086036341dcc73b5120226877ddde29d284c6e1d"} Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.753243 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bj4qt" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.753260 4959 scope.go:117] "RemoveContainer" containerID="13139d327b9fb65356722ea7fbbd7fbf1bca0df9a6795a2beaba0759f2f3f798" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.776906 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj4qt"] Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.780242 4959 scope.go:117] "RemoveContainer" containerID="f4b8cf2da39a886c5b41f00fb6f3dd3cb87b52ff2ca752a26cc11d6cbc3123ed" Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.797487 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bj4qt"] Jan 21 14:32:19 crc kubenswrapper[4959]: I0121 14:32:19.806937 4959 scope.go:117] "RemoveContainer" containerID="49ce4a866caacca9373058d41a026f46037143bf6f0859c8e60707cce4e84000" Jan 21 14:32:21 crc kubenswrapper[4959]: I0121 14:32:21.298514 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" path="/var/lib/kubelet/pods/be0b60f4-e2dd-4558-8fe5-af196fcc528a/volumes" Jan 21 14:32:21 crc kubenswrapper[4959]: I0121 14:32:21.379530 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:32:21 crc kubenswrapper[4959]: I0121 14:32:21.379596 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:32:37 crc kubenswrapper[4959]: I0121 14:32:37.405762 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-77cdb9766f-rtq4k" podUID="3913238c-8062-4839-9106-ce99f45ccadf" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.872463 4959 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rgl6b"] Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.873635 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.873651 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.873669 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerName="extract-content" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.873676 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerName="extract-content" Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.873725 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="extract-content" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.873732 4959 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="extract-content" Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.873745 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerName="extract-content" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.873927 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerName="extract-content" Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.873946 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="extract-utilities" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.873952 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="extract-utilities" Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.873960 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerName="extract-utilities" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.873966 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerName="extract-utilities" Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.873973 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerName="extract-utilities" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.873979 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerName="extract-utilities" Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.873992 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.873998 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: E0121 14:32:45.874012 4959 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.874018 4959 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.874250 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0b60f4-e2dd-4558-8fe5-af196fcc528a" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.874268 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc37002d-945c-4a7b-b7dd-6f4a5abf9d9e" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.874282 4959 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c984e68-96af-402f-ada1-b3d673076af7" containerName="registry-server" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.875690 4959 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.884769 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgl6b"] Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.954294 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-utilities\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.954574 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zknfq\" (UniqueName: \"kubernetes.io/projected/192aee78-47b0-4910-bb04-feeb414190e2-kube-api-access-zknfq\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:45 crc kubenswrapper[4959]: I0121 14:32:45.954686 4959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-catalog-content\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:46 crc kubenswrapper[4959]: I0121 14:32:46.056351 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-catalog-content\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:46 crc kubenswrapper[4959]: I0121 14:32:46.056449 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-utilities\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:46 crc kubenswrapper[4959]: I0121 14:32:46.056524 4959 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zknfq\" (UniqueName: \"kubernetes.io/projected/192aee78-47b0-4910-bb04-feeb414190e2-kube-api-access-zknfq\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:46 crc kubenswrapper[4959]: I0121 14:32:46.057345 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-catalog-content\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:46 crc kubenswrapper[4959]: I0121 14:32:46.057569 4959 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-utilities\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:46 crc kubenswrapper[4959]: I0121 14:32:46.077715 4959 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zknfq\" (UniqueName: \"kubernetes.io/projected/192aee78-47b0-4910-bb04-feeb414190e2-kube-api-access-zknfq\") pod \"community-operators-rgl6b\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:46 crc kubenswrapper[4959]: I0121 14:32:46.207782 4959 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:46 crc kubenswrapper[4959]: I0121 14:32:46.791408 4959 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgl6b"] Jan 21 14:32:47 crc kubenswrapper[4959]: I0121 14:32:47.000052 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgl6b" event={"ID":"192aee78-47b0-4910-bb04-feeb414190e2","Type":"ContainerStarted","Data":"0ada4ecf454419013a88b0cb231e4fda22fd83b1ac54771206936ee15677c861"} Jan 21 14:32:48 crc kubenswrapper[4959]: I0121 14:32:48.095409 4959 generic.go:334] "Generic (PLEG): container finished" podID="192aee78-47b0-4910-bb04-feeb414190e2" containerID="28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5" exitCode=0 Jan 21 14:32:48 crc kubenswrapper[4959]: I0121 14:32:48.095520 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgl6b" event={"ID":"192aee78-47b0-4910-bb04-feeb414190e2","Type":"ContainerDied","Data":"28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5"} Jan 21 14:32:51 crc kubenswrapper[4959]: I0121 14:32:51.145355 4959 generic.go:334] "Generic (PLEG): container finished" podID="192aee78-47b0-4910-bb04-feeb414190e2" containerID="90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f" exitCode=0 Jan 21 14:32:51 crc kubenswrapper[4959]: I0121 14:32:51.145488 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgl6b" event={"ID":"192aee78-47b0-4910-bb04-feeb414190e2","Type":"ContainerDied","Data":"90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f"} Jan 21 14:32:51 crc kubenswrapper[4959]: I0121 14:32:51.379640 4959 patch_prober.go:28] interesting pod/machine-config-daemon-wwkrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:32:51 crc kubenswrapper[4959]: I0121 14:32:51.379942 4959 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:32:51 crc kubenswrapper[4959]: I0121 14:32:51.379991 4959 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" Jan 21 14:32:51 crc kubenswrapper[4959]: I0121 14:32:51.380703 4959 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"} pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:32:51 
Jan 21 14:32:51 crc kubenswrapper[4959]: I0121 14:32:51.380767 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerName="machine-config-daemon" containerID="cri-o://fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" gracePeriod=600
Jan 21 14:32:51 crc kubenswrapper[4959]: E0121 14:32:51.512023 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:32:52 crc kubenswrapper[4959]: I0121 14:32:52.167031 4959 generic.go:334] "Generic (PLEG): container finished" podID="00d99d89-7cdc-410d-b2f3-347be806f79a" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" exitCode=0
Jan 21 14:32:52 crc kubenswrapper[4959]: I0121 14:32:52.167147 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerDied","Data":"fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"}
Jan 21 14:32:52 crc kubenswrapper[4959]: I0121 14:32:52.167446 4959 scope.go:117] "RemoveContainer" containerID="e93b9e76829d9c6d9a5450fccd9269d2e8abae5a4c912581303d59bc6014b9ee"
Jan 21 14:32:52 crc kubenswrapper[4959]: I0121 14:32:52.168438 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:32:52 crc kubenswrapper[4959]: E0121 14:32:52.168848 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:32:52 crc kubenswrapper[4959]: I0121 14:32:52.172312 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgl6b" event={"ID":"192aee78-47b0-4910-bb04-feeb414190e2","Type":"ContainerStarted","Data":"67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8"}
Jan 21 14:32:52 crc kubenswrapper[4959]: I0121 14:32:52.219467 4959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rgl6b" podStartSLOduration=3.5891813040000002 podStartE2EDuration="7.219443278s" podCreationTimestamp="2026-01-21 14:32:45 +0000 UTC" firstStartedPulling="2026-01-21 14:32:48.100228191 +0000 UTC m=+5029.063258734" lastFinishedPulling="2026-01-21 14:32:51.730490165 +0000 UTC m=+5032.693520708" observedRunningTime="2026-01-21 14:32:52.215588394 +0000 UTC m=+5033.178618947" watchObservedRunningTime="2026-01-21 14:32:52.219443278 +0000 UTC m=+5033.182473831"
Jan 21 14:32:56 crc kubenswrapper[4959]: I0121 14:32:56.208516 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rgl6b"
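
From here on, the "back-off 5m0s" errors repeat for several minutes: once a container crash-loops, the kubelet delays each restart with an exponential back-off that is capped, per the message, at five minutes. The intermediate "Error syncing pod, skipping" entries that follow are periodic sync attempts rejected while the back-off timer runs, not fresh restarts. A sketch of that delay curve, assuming the kubelet's documented 10s starting point and doubling behaviour:

    // Sketch: the capped exponential restart back-off reflected in the
    // "back-off 5m0s" messages. The 10s initial delay and doubling are the
    // kubelet's documented defaults; treat the constants as assumptions.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // the "back-off 5m0s" ceiling seen in the log
            }
        }
    }
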
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:56 crc kubenswrapper[4959]: I0121 14:32:56.249540 4959 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:57 crc kubenswrapper[4959]: I0121 14:32:57.262117 4959 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:57 crc kubenswrapper[4959]: I0121 14:32:57.314408 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgl6b"] Jan 21 14:32:59 crc kubenswrapper[4959]: I0121 14:32:59.237001 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rgl6b" podUID="192aee78-47b0-4910-bb04-feeb414190e2" containerName="registry-server" containerID="cri-o://67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8" gracePeriod=2 Jan 21 14:32:59 crc kubenswrapper[4959]: I0121 14:32:59.752819 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:32:59 crc kubenswrapper[4959]: I0121 14:32:59.938390 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zknfq\" (UniqueName: \"kubernetes.io/projected/192aee78-47b0-4910-bb04-feeb414190e2-kube-api-access-zknfq\") pod \"192aee78-47b0-4910-bb04-feeb414190e2\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " Jan 21 14:32:59 crc kubenswrapper[4959]: I0121 14:32:59.938526 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-catalog-content\") pod \"192aee78-47b0-4910-bb04-feeb414190e2\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " Jan 21 14:32:59 crc kubenswrapper[4959]: I0121 14:32:59.938697 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-utilities\") pod \"192aee78-47b0-4910-bb04-feeb414190e2\" (UID: \"192aee78-47b0-4910-bb04-feeb414190e2\") " Jan 21 14:32:59 crc kubenswrapper[4959]: I0121 14:32:59.940181 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-utilities" (OuterVolumeSpecName: "utilities") pod "192aee78-47b0-4910-bb04-feeb414190e2" (UID: "192aee78-47b0-4910-bb04-feeb414190e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:32:59 crc kubenswrapper[4959]: I0121 14:32:59.945365 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192aee78-47b0-4910-bb04-feeb414190e2-kube-api-access-zknfq" (OuterVolumeSpecName: "kube-api-access-zknfq") pod "192aee78-47b0-4910-bb04-feeb414190e2" (UID: "192aee78-47b0-4910-bb04-feeb414190e2"). InnerVolumeSpecName "kube-api-access-zknfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.001088 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "192aee78-47b0-4910-bb04-feeb414190e2" (UID: "192aee78-47b0-4910-bb04-feeb414190e2"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.040892 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zknfq\" (UniqueName: \"kubernetes.io/projected/192aee78-47b0-4910-bb04-feeb414190e2-kube-api-access-zknfq\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.040924 4959 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.040940 4959 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192aee78-47b0-4910-bb04-feeb414190e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.247703 4959 generic.go:334] "Generic (PLEG): container finished" podID="192aee78-47b0-4910-bb04-feeb414190e2" containerID="67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8" exitCode=0 Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.247757 4959 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgl6b" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.247772 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgl6b" event={"ID":"192aee78-47b0-4910-bb04-feeb414190e2","Type":"ContainerDied","Data":"67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8"} Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.247802 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgl6b" event={"ID":"192aee78-47b0-4910-bb04-feeb414190e2","Type":"ContainerDied","Data":"0ada4ecf454419013a88b0cb231e4fda22fd83b1ac54771206936ee15677c861"} Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.247822 4959 scope.go:117] "RemoveContainer" containerID="67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.270567 4959 scope.go:117] "RemoveContainer" containerID="90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.285907 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgl6b"] Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.295325 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rgl6b"] Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.306947 4959 scope.go:117] "RemoveContainer" containerID="28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5" Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.354839 4959 scope.go:117] "RemoveContainer" containerID="67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8" Jan 21 14:33:00 crc kubenswrapper[4959]: E0121 14:33:00.355606 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8\": container with ID starting with 67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8 not found: ID does not exist" containerID="67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8" Jan 21 14:33:00 crc 
Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.355738 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8"} err="failed to get container status \"67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8\": rpc error: code = NotFound desc = could not find container \"67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8\": container with ID starting with 67307eeb8a503b602473eb0031e42d1f7d6b4c71ffb9c630444240b7a821dac8 not found: ID does not exist"
Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.355820 4959 scope.go:117] "RemoveContainer" containerID="90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f"
Jan 21 14:33:00 crc kubenswrapper[4959]: E0121 14:33:00.356269 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f\": container with ID starting with 90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f not found: ID does not exist" containerID="90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f"
Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.356358 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f"} err="failed to get container status \"90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f\": rpc error: code = NotFound desc = could not find container \"90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f\": container with ID starting with 90f6dccc9dbff7fb31bdaeef2d101661cc905ec0df60d2c5a0a00396971f630f not found: ID does not exist"
Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.356432 4959 scope.go:117] "RemoveContainer" containerID="28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5"
Jan 21 14:33:00 crc kubenswrapper[4959]: E0121 14:33:00.356817 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5\": container with ID starting with 28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5 not found: ID does not exist" containerID="28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5"
Jan 21 14:33:00 crc kubenswrapper[4959]: I0121 14:33:00.356910 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5"} err="failed to get container status \"28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5\": rpc error: code = NotFound desc = could not find container \"28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5\": container with ID starting with 28baa6fe7fee2f70ecbd2a56f2f46b3daffbe87ff9b08acf5758f475fbf922f5 not found: ID does not exist"
Jan 21 14:33:01 crc kubenswrapper[4959]: I0121 14:33:01.295746 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192aee78-47b0-4910-bb04-feeb414190e2" path="/var/lib/kubelet/pods/192aee78-47b0-4910-bb04-feeb414190e2/volumes"
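
The NotFound / "DeleteContainer returned error" pairs above are a benign race: the runtime had already removed the containers by the time the deferred deletion retried, so the CRI answers rpc error: code = NotFound and the kubelet simply moves on. A sketch of treating that status code as "already deleted", which is in effect what happens here; isAlreadyGone is a hypothetical helper name, not a kubelet API:

    // Sketch: classify the gRPC NotFound seen in the log entries above.
    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func isAlreadyGone(err error) bool {
        // Matches errors of the form "rpc error: code = NotFound desc = ...".
        return status.Code(err) == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound, "could not find container")
        fmt.Println("already gone:", isAlreadyGone(err)) // true
    }
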
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:33:17 crc kubenswrapper[4959]: I0121 14:33:17.286716 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:33:17 crc kubenswrapper[4959]: E0121 14:33:17.288528 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:33:29 crc kubenswrapper[4959]: I0121 14:33:29.290811 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:33:29 crc kubenswrapper[4959]: E0121 14:33:29.291869 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:33:42 crc kubenswrapper[4959]: I0121 14:33:42.287447 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:33:42 crc kubenswrapper[4959]: E0121 14:33:42.288110 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:33:56 crc kubenswrapper[4959]: I0121 14:33:56.286001 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:33:56 crc kubenswrapper[4959]: E0121 14:33:56.286767 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:34:10 crc kubenswrapper[4959]: I0121 14:34:10.286515 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:34:10 crc kubenswrapper[4959]: E0121 14:34:10.288177 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 21 14:34:10 crc kubenswrapper[4959]: E0121 14:34:10.288177 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:34:25 crc kubenswrapper[4959]: I0121 14:34:25.286568 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:34:25 crc kubenswrapper[4959]: E0121 14:34:25.287555 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:34:37 crc kubenswrapper[4959]: I0121 14:34:37.737667 4959 scope.go:117] "RemoveContainer" containerID="a45342076d7d85ace0863f482f29e629474e652a06f16721597f24573d5e8d39"
Jan 21 14:34:40 crc kubenswrapper[4959]: I0121 14:34:40.286681 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:34:40 crc kubenswrapper[4959]: E0121 14:34:40.287450 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:34:55 crc kubenswrapper[4959]: I0121 14:34:55.292991 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:34:55 crc kubenswrapper[4959]: E0121 14:34:55.293906 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:35:09 crc kubenswrapper[4959]: I0121 14:35:09.286254 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:35:09 crc kubenswrapper[4959]: E0121 14:35:09.287232 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:35:23 crc kubenswrapper[4959]: I0121 14:35:23.289208 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:35:23 crc kubenswrapper[4959]: E0121 14:35:23.290064 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:35:37 crc kubenswrapper[4959]: I0121 14:35:37.325936 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:35:37 crc kubenswrapper[4959]: E0121 14:35:37.326902 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:35:48 crc kubenswrapper[4959]: I0121 14:35:48.286090 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:35:48 crc kubenswrapper[4959]: E0121 14:35:48.286993 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:36:01 crc kubenswrapper[4959]: I0121 14:36:01.286529 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:36:01 crc kubenswrapper[4959]: E0121 14:36:01.287402 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:36:13 crc kubenswrapper[4959]: I0121 14:36:13.290201 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
Jan 21 14:36:13 crc kubenswrapper[4959]: E0121 14:36:13.290843 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a"
Jan 21 14:36:26 crc kubenswrapper[4959]: I0121 14:36:26.286356 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c"
pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:36:39 crc kubenswrapper[4959]: I0121 14:36:39.298522 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:36:39 crc kubenswrapper[4959]: E0121 14:36:39.299607 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:36:54 crc kubenswrapper[4959]: I0121 14:36:54.286634 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:36:54 crc kubenswrapper[4959]: E0121 14:36:54.287600 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:36:54 crc kubenswrapper[4959]: I0121 14:36:54.400890 4959 generic.go:334] "Generic (PLEG): container finished" podID="4e64b729-53c6-481b-8085-d5e100e34d51" containerID="689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b" exitCode=0 Jan 21 14:36:54 crc kubenswrapper[4959]: I0121 14:36:54.400951 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cvdt/must-gather-bnfk7" event={"ID":"4e64b729-53c6-481b-8085-d5e100e34d51","Type":"ContainerDied","Data":"689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b"} Jan 21 14:36:54 crc kubenswrapper[4959]: I0121 14:36:54.401589 4959 scope.go:117] "RemoveContainer" containerID="689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b" Jan 21 14:36:54 crc kubenswrapper[4959]: I0121 14:36:54.770539 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9cvdt_must-gather-bnfk7_4e64b729-53c6-481b-8085-d5e100e34d51/gather/0.log" Jan 21 14:37:02 crc kubenswrapper[4959]: I0121 14:37:02.697203 4959 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9cvdt/must-gather-bnfk7"] Jan 21 14:37:02 crc kubenswrapper[4959]: I0121 14:37:02.697930 4959 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9cvdt/must-gather-bnfk7" podUID="4e64b729-53c6-481b-8085-d5e100e34d51" containerName="copy" containerID="cri-o://bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7" gracePeriod=2 Jan 21 14:37:02 crc kubenswrapper[4959]: I0121 14:37:02.710332 4959 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9cvdt/must-gather-bnfk7"] Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.170756 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9cvdt_must-gather-bnfk7_4e64b729-53c6-481b-8085-d5e100e34d51/copy/0.log" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.172728 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cvdt/must-gather-bnfk7" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.206518 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e64b729-53c6-481b-8085-d5e100e34d51-must-gather-output\") pod \"4e64b729-53c6-481b-8085-d5e100e34d51\" (UID: \"4e64b729-53c6-481b-8085-d5e100e34d51\") " Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.206645 4959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9wk9\" (UniqueName: \"kubernetes.io/projected/4e64b729-53c6-481b-8085-d5e100e34d51-kube-api-access-t9wk9\") pod \"4e64b729-53c6-481b-8085-d5e100e34d51\" (UID: \"4e64b729-53c6-481b-8085-d5e100e34d51\") " Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.212554 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e64b729-53c6-481b-8085-d5e100e34d51-kube-api-access-t9wk9" (OuterVolumeSpecName: "kube-api-access-t9wk9") pod "4e64b729-53c6-481b-8085-d5e100e34d51" (UID: "4e64b729-53c6-481b-8085-d5e100e34d51"). InnerVolumeSpecName "kube-api-access-t9wk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.308992 4959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9wk9\" (UniqueName: \"kubernetes.io/projected/4e64b729-53c6-481b-8085-d5e100e34d51-kube-api-access-t9wk9\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.416407 4959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e64b729-53c6-481b-8085-d5e100e34d51-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4e64b729-53c6-481b-8085-d5e100e34d51" (UID: "4e64b729-53c6-481b-8085-d5e100e34d51"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.483351 4959 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9cvdt_must-gather-bnfk7_4e64b729-53c6-481b-8085-d5e100e34d51/copy/0.log" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.483979 4959 generic.go:334] "Generic (PLEG): container finished" podID="4e64b729-53c6-481b-8085-d5e100e34d51" containerID="bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7" exitCode=143 Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.484054 4959 scope.go:117] "RemoveContainer" containerID="bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.484240 4959 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cvdt/must-gather-bnfk7" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.508427 4959 scope.go:117] "RemoveContainer" containerID="689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.513835 4959 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e64b729-53c6-481b-8085-d5e100e34d51-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.583261 4959 scope.go:117] "RemoveContainer" containerID="bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7" Jan 21 14:37:03 crc kubenswrapper[4959]: E0121 14:37:03.585055 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7\": container with ID starting with bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7 not found: ID does not exist" containerID="bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.585126 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7"} err="failed to get container status \"bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7\": rpc error: code = NotFound desc = could not find container \"bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7\": container with ID starting with bf2a64243ec5e18f942f01cbe24c0066b2eb9f41d8badb28c0d39763fb9c37c7 not found: ID does not exist" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.585161 4959 scope.go:117] "RemoveContainer" containerID="689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b" Jan 21 14:37:03 crc kubenswrapper[4959]: E0121 14:37:03.585623 4959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b\": container with ID starting with 689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b not found: ID does not exist" containerID="689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b" Jan 21 14:37:03 crc kubenswrapper[4959]: I0121 14:37:03.585677 4959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b"} err="failed to get container status \"689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b\": rpc error: code = NotFound desc = could not find container \"689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b\": container with ID starting with 689a8efe2ad0cd03e6fc1d7a7d8d935200cbd1a5a49dac2974c5f59ad27d679b not found: ID does not exist" Jan 21 14:37:05 crc kubenswrapper[4959]: I0121 14:37:05.301045 4959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e64b729-53c6-481b-8085-d5e100e34d51" path="/var/lib/kubelet/pods/4e64b729-53c6-481b-8085-d5e100e34d51/volumes" Jan 21 14:37:07 crc kubenswrapper[4959]: I0121 14:37:07.287416 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:37:07 crc kubenswrapper[4959]: E0121 14:37:07.288817 4959 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:37:20 crc kubenswrapper[4959]: I0121 14:37:20.286627 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:37:20 crc kubenswrapper[4959]: E0121 14:37:20.287529 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:37:33 crc kubenswrapper[4959]: I0121 14:37:33.285868 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:37:33 crc kubenswrapper[4959]: E0121 14:37:33.287522 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:37:44 crc kubenswrapper[4959]: I0121 14:37:44.335422 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:37:44 crc kubenswrapper[4959]: E0121 14:37:44.336271 4959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwkrl_openshift-machine-config-operator(00d99d89-7cdc-410d-b2f3-347be806f79a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" podUID="00d99d89-7cdc-410d-b2f3-347be806f79a" Jan 21 14:37:56 crc kubenswrapper[4959]: I0121 14:37:56.286405 4959 scope.go:117] "RemoveContainer" containerID="fef204290b358bfe2f49f81345c1984b46e165120796aaa3a9ad3a21729ec96c" Jan 21 14:37:56 crc kubenswrapper[4959]: I0121 14:37:56.987384 4959 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwkrl" event={"ID":"00d99d89-7cdc-410d-b2f3-347be806f79a","Type":"ContainerStarted","Data":"4945f45b4d032d9624dc5950addc889f48f1bfc68388ec30a6a1263038f9073c"} Jan 21 14:38:37 crc kubenswrapper[4959]: I0121 14:38:37.871425 4959 scope.go:117] "RemoveContainer" containerID="c8f75b45e4468bc1b78a48629b515999c041c21eb076251a679ec79068ce4cdd" Jan 21 14:38:37 crc kubenswrapper[4959]: I0121 14:38:37.902304 4959 scope.go:117] "RemoveContainer" containerID="5baba32726e0ea3351bc03b02c61caba774e604990dba76ac080578f6a25603f" Jan 21 14:38:37 crc kubenswrapper[4959]: I0121 14:38:37.930244 4959 scope.go:117] "RemoveContainer" containerID="ead43ab31ae2193e2ab5eb11caabee5595835f6493817fba76aaad4556ffef6d"